Jan 29 10:58:52 crc systemd[1]: Starting Kubernetes Kubelet... Jan 29 10:58:52 crc restorecon[4588]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 29 10:58:52 
crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:52 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 
10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc 
restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 
crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 
crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 10:58:53 crc 
restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc 
restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 10:58:53 crc restorecon[4588]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 10:58:54 crc kubenswrapper[4593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:58:54 crc kubenswrapper[4593]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 10:58:54 crc kubenswrapper[4593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:58:54 crc kubenswrapper[4593]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 10:58:54 crc kubenswrapper[4593]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:58:54 crc kubenswrapper[4593]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.605413 4593 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610389 4593 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610424 4593 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610434 4593 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610443 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610452 4593 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610464 4593 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610474 4593 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610490 4593 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610501 4593 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
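[Editor's note] The deprecation warnings above all point at the same remediation: move the flagged command-line options into the KubeletConfiguration file passed via --config (see the linked kubelet-config-file documentation). Purely as an illustration of that mapping, and not a reconstruction of this node's actual settings, the flagged flags correspond roughly to the config fields below; every value shown is a placeholder.

# Illustrative KubeletConfiguration sketch; field values are placeholders, not this node's settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"      # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # replaces --volume-plugin-dir
registerWithTaints:                                             # replaces --register-with-taints
  - key: "example.com/placeholder"
    effect: "NoSchedule"
systemReserved:                                                 # replaces --system-reserved
  cpu: "500m"
  memory: "1Gi"
evictionHard:                                                   # per the warning, eviction settings supersede --minimum-container-ttl-duration
  memory.available: "100Mi"
# --pod-infra-container-image has no direct config-file replacement; per the warning above,
# the sandbox (pause) image is taken from the container runtime's own configuration.

The surrounding "unrecognized feature gate" warnings typically indicate gate names (here, OpenShift cluster-level gates such as AlibabaPlatform or GatewayAPI) that are not registered in the kubelet's own feature-gate table; the kubelet records them as warnings and continues starting up, as the later log entries show.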
Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610511 4593 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610521 4593 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610529 4593 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610537 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610545 4593 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610553 4593 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610561 4593 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610568 4593 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610576 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610584 4593 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610591 4593 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610599 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610610 4593 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610620 4593 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610659 4593 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610668 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610677 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610687 4593 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610697 4593 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
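Note: the bulk of the startup warnings are feature_gate.go:330 "unrecognized feature gate: <Name>" entries; these gate names appear to come from the cluster's feature-gate configuration rather than from the kubelet's own gate set, and the same list is re-logged each time the gates are applied (it appears again further down). A small sketch, under the same kubelet.log assumption, that deduplicates and counts them:

import re
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def unrecognized_gates(text: str) -> Counter:
    # Count how many times each unknown gate name is reported.
    return Counter(UNRECOGNIZED.findall(text))

if __name__ == "__main__":
    counts = unrecognized_gates(open("kubelet.log", encoding="utf-8").read())
    for name, n in counts.most_common():
        print(f"{n:3d}  {name}")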
Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610708 4593 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610717 4593 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610725 4593 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610733 4593 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610740 4593 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610748 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610756 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610764 4593 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610771 4593 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610779 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610786 4593 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610794 4593 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610802 4593 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610810 4593 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610817 4593 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610832 4593 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610840 4593 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610848 4593 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610855 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610864 4593 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610871 4593 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610880 4593 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610887 4593 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610895 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610903 4593 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610910 4593 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610917 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610925 4593 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610933 4593 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610941 4593 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610948 4593 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610955 4593 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610963 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610970 4593 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610981 4593 feature_gate.go:330] unrecognized feature gate: Example Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610989 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.610996 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.611004 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.611011 4593 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.611018 4593 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.611026 4593 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.611034 4593 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.611042 4593 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612302 4593 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612325 4593 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612341 4593 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612352 4593 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612363 4593 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612372 4593 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612384 4593 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612395 4593 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612406 4593 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 10:58:54 crc 
kubenswrapper[4593]: I0129 10:58:54.612415 4593 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612425 4593 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612434 4593 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612444 4593 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612453 4593 flags.go:64] FLAG: --cgroup-root="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612462 4593 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612471 4593 flags.go:64] FLAG: --client-ca-file="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612480 4593 flags.go:64] FLAG: --cloud-config="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612489 4593 flags.go:64] FLAG: --cloud-provider="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612497 4593 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612509 4593 flags.go:64] FLAG: --cluster-domain="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612518 4593 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612528 4593 flags.go:64] FLAG: --config-dir="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612537 4593 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612547 4593 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612562 4593 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612573 4593 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612584 4593 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612596 4593 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612608 4593 flags.go:64] FLAG: --contention-profiling="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612619 4593 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612661 4593 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612674 4593 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612686 4593 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612700 4593 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612712 4593 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612722 4593 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612733 4593 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612745 4593 flags.go:64] FLAG: --enable-server="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612756 4593 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 
10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612772 4593 flags.go:64] FLAG: --event-burst="100" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612784 4593 flags.go:64] FLAG: --event-qps="50" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612794 4593 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612805 4593 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612816 4593 flags.go:64] FLAG: --eviction-hard="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612832 4593 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612843 4593 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612854 4593 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612866 4593 flags.go:64] FLAG: --eviction-soft="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612876 4593 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612887 4593 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612897 4593 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612908 4593 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612918 4593 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612930 4593 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612941 4593 flags.go:64] FLAG: --feature-gates="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612954 4593 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612966 4593 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612978 4593 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.612989 4593 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613000 4593 flags.go:64] FLAG: --healthz-port="10248" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613011 4593 flags.go:64] FLAG: --help="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613021 4593 flags.go:64] FLAG: --hostname-override="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613032 4593 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613044 4593 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613055 4593 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613066 4593 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613077 4593 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613088 4593 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613099 4593 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 
10:58:54.613110 4593 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613121 4593 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613133 4593 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613143 4593 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613152 4593 flags.go:64] FLAG: --kube-reserved="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613161 4593 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613170 4593 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613179 4593 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613187 4593 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613196 4593 flags.go:64] FLAG: --lock-file="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613204 4593 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613215 4593 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613227 4593 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613244 4593 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613255 4593 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613266 4593 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613278 4593 flags.go:64] FLAG: --logging-format="text" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613289 4593 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613301 4593 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613317 4593 flags.go:64] FLAG: --manifest-url="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613329 4593 flags.go:64] FLAG: --manifest-url-header="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613343 4593 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613355 4593 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613369 4593 flags.go:64] FLAG: --max-pods="110" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613380 4593 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613391 4593 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613402 4593 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613414 4593 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613425 4593 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613437 4593 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 
10:58:54.613489 4593 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613514 4593 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613525 4593 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613537 4593 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613549 4593 flags.go:64] FLAG: --pod-cidr="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613559 4593 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613577 4593 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613590 4593 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613602 4593 flags.go:64] FLAG: --pods-per-core="0" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613613 4593 flags.go:64] FLAG: --port="10250" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613625 4593 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613672 4593 flags.go:64] FLAG: --provider-id="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613684 4593 flags.go:64] FLAG: --qos-reserved="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613696 4593 flags.go:64] FLAG: --read-only-port="10255" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613707 4593 flags.go:64] FLAG: --register-node="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613719 4593 flags.go:64] FLAG: --register-schedulable="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613730 4593 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613752 4593 flags.go:64] FLAG: --registry-burst="10" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613762 4593 flags.go:64] FLAG: --registry-qps="5" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613771 4593 flags.go:64] FLAG: --reserved-cpus="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613779 4593 flags.go:64] FLAG: --reserved-memory="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613791 4593 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613800 4593 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613810 4593 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613819 4593 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613827 4593 flags.go:64] FLAG: --runonce="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613836 4593 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613846 4593 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613855 4593 flags.go:64] FLAG: --seccomp-default="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613864 4593 flags.go:64] 
FLAG: --serialize-image-pulls="true" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613872 4593 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613881 4593 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613890 4593 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613899 4593 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613908 4593 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613917 4593 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613926 4593 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613934 4593 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613943 4593 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613952 4593 flags.go:64] FLAG: --system-cgroups="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613961 4593 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613975 4593 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613984 4593 flags.go:64] FLAG: --tls-cert-file="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.613993 4593 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614003 4593 flags.go:64] FLAG: --tls-min-version="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614012 4593 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614021 4593 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614029 4593 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614038 4593 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614048 4593 flags.go:64] FLAG: --v="2" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614059 4593 flags.go:64] FLAG: --version="false" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614070 4593 flags.go:64] FLAG: --vmodule="" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614088 4593 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.614098 4593 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614299 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614309 4593 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614318 4593 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614326 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614334 4593 feature_gate.go:330] unrecognized feature 
gate: PrivateHostedZoneAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614343 4593 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614351 4593 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614358 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614366 4593 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614374 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614382 4593 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614391 4593 feature_gate.go:330] unrecognized feature gate: Example Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614399 4593 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614407 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614414 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614423 4593 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614431 4593 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614442 4593 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614451 4593 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614459 4593 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614467 4593 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614475 4593 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614486 4593 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
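Note: every effective command-line value is echoed above as a flags.go:64 FLAG: --name="value" entry. A sketch (same kubelet.log assumption) that turns that dump back into a dictionary, which makes it easy to diff the flags between two kubelet starts:

import re

# flags.go:64 entries look like:  FLAG: --cgroup-driver="cgroupfs"
FLAG = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def flag_dump(text: str) -> dict[str, str]:
    # Map each logged kubelet flag to the value it started with.
    return dict(FLAG.findall(text))

if __name__ == "__main__":
    flags = flag_dump(open("kubelet.log", encoding="utf-8").read())
    print(flags.get("--config"))         # /etc/kubernetes/kubelet.conf in this log
    print(flags.get("--cgroup-driver"))  # cgroupfs here; the CRI runtime later reports systemd

Flag values containing an embedded double quote would not be captured by this pattern; none appear in this log.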
Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614496 4593 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614504 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614512 4593 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614521 4593 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614529 4593 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614537 4593 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614544 4593 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614552 4593 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614560 4593 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614567 4593 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614575 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614582 4593 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614595 4593 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614605 4593 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614614 4593 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614622 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614658 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614668 4593 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614677 4593 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614684 4593 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614693 4593 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614700 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614708 4593 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614715 4593 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614723 4593 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614732 4593 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614740 4593 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614747 4593 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614754 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614762 4593 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614770 4593 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614777 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614785 4593 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614792 4593 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614800 4593 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614808 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614817 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614824 4593 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614832 4593 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614840 4593 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614848 4593 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614855 4593 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614865 4593 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614874 4593 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614882 4593 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614891 4593 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614899 4593 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.614908 4593 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.617179 4593 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.633698 4593 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.633780 4593 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.633946 4593 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.633969 4593 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.633979 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.633992 4593 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634003 4593 feature_gate.go:330] unrecognized feature gate: Example Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634012 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634021 4593 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634029 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634039 4593 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634050 4593 feature_gate.go:351] Setting deprecated feature gate 
KMSv1=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634060 4593 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634069 4593 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634078 4593 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634087 4593 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634097 4593 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634106 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634115 4593 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634124 4593 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634133 4593 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634142 4593 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634150 4593 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634159 4593 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634167 4593 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634175 4593 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634184 4593 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634192 4593 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634201 4593 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634209 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634217 4593 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634226 4593 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634234 4593 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634243 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634252 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634261 4593 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634270 4593 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634278 4593 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634290 4593 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634302 4593 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634312 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634322 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634330 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634339 4593 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634356 4593 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634367 4593 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634378 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634387 4593 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634397 4593 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634406 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634417 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634427 4593 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634437 4593 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634445 4593 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634454 4593 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634463 4593 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634471 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634480 4593 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634488 4593 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634497 4593 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634505 4593 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634514 4593 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 10:58:54 crc 
kubenswrapper[4593]: W0129 10:58:54.634523 4593 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634532 4593 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634541 4593 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634549 4593 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634558 4593 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634567 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634575 4593 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634584 4593 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634592 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634600 4593 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634609 4593 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.634623 4593 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634895 4593 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634911 4593 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634920 4593 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634931 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634940 4593 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634949 4593 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634958 4593 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.634967 4593 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635580 4593 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635592 4593 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635601 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 
10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635610 4593 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635619 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635658 4593 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635668 4593 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635679 4593 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635692 4593 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635702 4593 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635711 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635721 4593 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635730 4593 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635738 4593 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635747 4593 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635758 4593 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
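Note: after each block of warnings, feature_gate.go:386 prints the resolved gate map ("feature gates: {map[...]}"), and it does so several times during startup (see the entries above and below). A sketch, under the same kubelet.log assumption, that parses the most recent such summary into a dict of gate name to enabled/disabled:

import re

GATES = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

def resolved_gates(text: str) -> dict[str, bool]:
    # Parse the last "feature gates: {map[...]}" summary in the captured log.
    matches = GATES.findall(text)
    if not matches:
        return {}
    pairs = (item.split(":", 1) for item in matches[-1].split())
    return {name: value == "true" for name, value in pairs}

if __name__ == "__main__":
    gates = resolved_gates(open("kubelet.log", encoding="utf-8").read())
    print(sorted(name for name, enabled in gates.items() if enabled))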
Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635768 4593 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635777 4593 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635785 4593 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635793 4593 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635802 4593 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635811 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635853 4593 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635865 4593 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635876 4593 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635886 4593 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635895 4593 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635906 4593 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635914 4593 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635924 4593 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635933 4593 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635942 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635951 4593 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635960 4593 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635968 4593 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635976 4593 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635984 4593 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.635993 4593 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636002 4593 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636010 4593 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636018 4593 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636026 4593 feature_gate.go:330] unrecognized feature gate: 
InsightsConfigAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636035 4593 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636044 4593 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636052 4593 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636061 4593 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636069 4593 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636077 4593 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636086 4593 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636097 4593 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636106 4593 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636115 4593 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636124 4593 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636136 4593 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636146 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636156 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636166 4593 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636175 4593 feature_gate.go:330] unrecognized feature gate: Example Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636185 4593 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636193 4593 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636202 4593 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636210 4593 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.636218 4593 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.636231 4593 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.636526 4593 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.653107 4593 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.653270 4593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.657186 4593 server.go:997] "Starting client certificate rotation" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.657222 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.658491 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 14:50:46.762307956 +0000 UTC Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.658614 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.761426 4593 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.763564 4593 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 10:58:54 crc kubenswrapper[4593]: E0129 10:58:54.776083 4593 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.828120 4593 log.go:25] "Validated CRI v1 runtime API" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.927930 4593 log.go:25] "Validated CRI v1 image API" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.930225 4593 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.937736 4593 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-10-52-47-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.937777 4593 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.957457 4593 manager.go:217] Machine: {Timestamp:2026-01-29 10:58:54.953862516 +0000 UTC m=+0.826896747 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 
MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:45084d3a-e241-4a9c-9dcd-e9b4966c3a23 BootID:670b3c30-a5d0-4b0c-bcf2-4664323fba7b Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0e:74:b9 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0e:74:b9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:73:69:92 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cf:b2:8a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5c:16:c6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:78:a4:25 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:be:4f:f9:97:2a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:02:ab:ba:3b:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} 
{Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.957837 4593 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.958042 4593 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.960706 4593 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.960973 4593 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.961096 4593 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.961352 4593 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.961376 4593 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.965623 4593 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.965718 4593 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 10:58:54 
crc kubenswrapper[4593]: I0129 10:58:54.965956 4593 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.966071 4593 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.984874 4593 kubelet.go:418] "Attempting to sync node with API server" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.984921 4593 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.984952 4593 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.984968 4593 kubelet.go:324] "Adding apiserver pod source" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.984989 4593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.992287 4593 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 10:58:54 crc kubenswrapper[4593]: I0129 10:58:54.994125 4593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.994239 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:54 crc kubenswrapper[4593]: E0129 10:58:54.994306 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:54 crc kubenswrapper[4593]: W0129 10:58:54.994431 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:54 crc kubenswrapper[4593]: E0129 10:58:54.994464 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.001773 4593 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005793 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005848 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005859 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005868 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 
10:58:55.005881 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005891 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005900 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005914 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005924 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005949 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005971 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.005981 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.009883 4593 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.010520 4593 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.010575 4593 server.go:1280] "Started kubelet" Jan 29 10:58:55 crc systemd[1]: Started Kubernetes Kubelet. Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.012505 4593 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.012798 4593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.013283 4593 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.015173 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.015209 4593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.015291 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:20:17.898814467 +0000 UTC Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.016990 4593 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.017005 4593 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.017094 4593 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.017438 4593 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.017541 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Jan 29 10:58:55 crc kubenswrapper[4593]: W0129 10:58:55.018755 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.018981 4593 factory.go:55] Registering systemd factory Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.019006 4593 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.018944 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.019368 4593 factory.go:153] Registering CRI-O factory Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.019402 4593 factory.go:221] Registration of the crio container factory successfully Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.019489 4593 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.019520 4593 factory.go:103] Registering Raw factory Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.019536 4593 manager.go:1196] Started watching for new ooms in manager Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.020218 4593 manager.go:319] Starting recovery of all containers Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.028061 4593 server.go:460] "Adding debug handlers to kubelet server" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.041911 4593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f2e86c8e077e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 10:58:55.010543584 +0000 UTC m=+0.883577775,LastTimestamp:2026-01-29 10:58:55.010543584 +0000 UTC m=+0.883577775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045181 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045285 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" 
seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045298 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045308 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045319 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045329 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045339 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045349 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045360 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045370 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045403 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045415 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045428 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 10:58:55 crc 
kubenswrapper[4593]: I0129 10:58:55.045443 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045459 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045472 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045486 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045497 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045514 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045528 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045539 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045550 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045560 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045570 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045581 
4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045594 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045611 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045626 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045653 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045663 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045711 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045742 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045753 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045763 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045772 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045781 4593 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045792 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045819 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045830 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045842 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045855 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045867 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045877 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045887 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045897 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045927 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045940 4593 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045954 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045968 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045980 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.045990 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046000 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046016 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046032 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046046 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046059 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046069 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046079 4593 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046089 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046101 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046112 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046124 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046136 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046150 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046164 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046174 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046183 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046194 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046207 4593 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046220 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046231 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046242 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046252 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046263 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046275 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046286 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046298 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046311 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046329 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046344 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046363 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046375 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046388 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046403 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046414 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046439 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046449 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046460 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046476 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046490 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046503 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046516 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046530 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046542 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046553 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046565 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046580 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046592 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046605 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046617 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046653 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046667 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046680 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046693 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046738 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046753 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046768 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046781 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046793 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046806 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046976 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.046989 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047006 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047017 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047029 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047040 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047052 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047064 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047077 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047090 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047126 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047137 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047149 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047162 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047174 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047186 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047199 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047210 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047221 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047234 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047247 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047260 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047273 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047285 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047298 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047309 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047322 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047335 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047347 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047358 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047370 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047381 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047393 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047405 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047423 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047434 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047447 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047458 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047470 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047483 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047493 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047505 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047524 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047537 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047548 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047559 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047570 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047580 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047591 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047602 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047611 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047620 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047641 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047651 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047660 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047668 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047677 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047685 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047694 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047703 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047713 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047722 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047730 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047739 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047768 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047777 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047789 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047798 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047808 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047818 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047827 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047838 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047846 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047855 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047864 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047874 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.047883 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048047 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048057 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048065 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048074 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048082 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048092 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048102 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048111 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048120 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048128 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048137 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048146 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048156 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048165 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.048174 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.050912 4593 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.050966 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.050985 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.050998 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.051011 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.051023 4593 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.051037 4593 reconstruct.go:97] "Volume reconstruction finished" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.051047 4593 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.052811 4593 manager.go:324] Recovery completed Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.060023 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.061277 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.061307 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.061316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc 
kubenswrapper[4593]: I0129 10:58:55.062592 4593 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.062612 4593 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.062647 4593 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.070625 4593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.072500 4593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.073289 4593 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.073375 4593 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.073712 4593 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:58:55 crc kubenswrapper[4593]: W0129 10:58:55.075509 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.075574 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.098652 4593 policy_none.go:49] "None policy: Start" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.100125 4593 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.100175 4593 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.117658 4593 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.156931 4593 manager.go:334] "Starting Device Plugin manager" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157014 4593 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157030 4593 server.go:79] "Starting device plugin registration server" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157432 4593 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157444 4593 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157726 4593 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157827 4593 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.157835 4593 plugin_manager.go:118] "Starting Kubelet 
Plugin Manager" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.168681 4593 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.174773 4593 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.174870 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.179018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.179057 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.179068 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.179217 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.179472 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.179510 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180345 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180356 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180399 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180519 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180682 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.180717 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.182152 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.183907 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.183936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.182211 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.183969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.183979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.184114 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.184224 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.184273 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.185496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.185518 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.185526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.186563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.186686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.186744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.186920 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.187700 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.187792 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.187876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.187906 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.187917 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.188096 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.188128 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.188975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.189137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.189237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.188994 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.189343 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.189355 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.218434 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.254964 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255013 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255049 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255069 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255090 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255125 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255164 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255218 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 
10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255235 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255253 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.255273 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.257871 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.258730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.258754 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.258761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.258778 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.259162 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356101 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356178 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356201 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356225 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356253 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356272 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356326 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356346 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356388 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356408 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.356450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357094 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357174 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357184 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357224 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357247 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357247 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357271 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 
10:58:55.357295 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357332 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357341 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357347 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357360 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.357402 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.459756 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.461475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.461537 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.461549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.461584 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.461994 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 29 10:58:55 crc 
kubenswrapper[4593]: I0129 10:58:55.511830 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.536910 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.542958 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.560433 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: W0129 10:58:55.564044 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-557a8ef92ffdf25c80e21416ff8cfcb189780266309a9d0b77b25a2ee4190e6f WatchSource:0}: Error finding container 557a8ef92ffdf25c80e21416ff8cfcb189780266309a9d0b77b25a2ee4190e6f: Status 404 returned error can't find the container with id 557a8ef92ffdf25c80e21416ff8cfcb189780266309a9d0b77b25a2ee4190e6f Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.564358 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 10:58:55 crc kubenswrapper[4593]: W0129 10:58:55.595518 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-376c5edac488c7872863e1f0269d343d491dae107e2aba2e33bec6167a74fa59 WatchSource:0}: Error finding container 376c5edac488c7872863e1f0269d343d491dae107e2aba2e33bec6167a74fa59: Status 404 returned error can't find the container with id 376c5edac488c7872863e1f0269d343d491dae107e2aba2e33bec6167a74fa59 Jan 29 10:58:55 crc kubenswrapper[4593]: W0129 10:58:55.596095 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ac8fff6c374d8e8e51d6d02fcbe1199ff1b4db64ad14c4712cc248ca87ed7577 WatchSource:0}: Error finding container ac8fff6c374d8e8e51d6d02fcbe1199ff1b4db64ad14c4712cc248ca87ed7577: Status 404 returned error can't find the container with id ac8fff6c374d8e8e51d6d02fcbe1199ff1b4db64ad14c4712cc248ca87ed7577 Jan 29 10:58:55 crc kubenswrapper[4593]: W0129 10:58:55.599003 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b5bf5f068359763bbeb3d4640b681aadd8bd4c8aa0c5f2b679c2e29c7419298c WatchSource:0}: Error finding container b5bf5f068359763bbeb3d4640b681aadd8bd4c8aa0c5f2b679c2e29c7419298c: Status 404 returned error can't find the container with id b5bf5f068359763bbeb3d4640b681aadd8bd4c8aa0c5f2b679c2e29c7419298c Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.619790 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.862652 4593 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.864149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.864200 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.864219 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:55 crc kubenswrapper[4593]: I0129 10:58:55.864248 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:58:55 crc kubenswrapper[4593]: E0129 10:58:55.864584 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.011847 4593 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.015901 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:23:29.187579956 +0000 UTC Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.076942 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"557a8ef92ffdf25c80e21416ff8cfcb189780266309a9d0b77b25a2ee4190e6f"} Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.077585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"376c5edac488c7872863e1f0269d343d491dae107e2aba2e33bec6167a74fa59"} Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.078545 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b5bf5f068359763bbeb3d4640b681aadd8bd4c8aa0c5f2b679c2e29c7419298c"} Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.079340 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ac8fff6c374d8e8e51d6d02fcbe1199ff1b4db64ad14c4712cc248ca87ed7577"} Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.080335 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6dfc67a8938a6ba61e4f775bd391e9da2837c1581d867676b06b0ec2c85f1aa0"} Jan 29 10:58:56 crc kubenswrapper[4593]: W0129 10:58:56.307593 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.308024 4593 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.421091 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Jan 29 10:58:56 crc kubenswrapper[4593]: W0129 10:58:56.469133 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.469214 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:56 crc kubenswrapper[4593]: W0129 10:58:56.469132 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.469256 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:56 crc kubenswrapper[4593]: W0129 10:58:56.578135 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.578212 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.665007 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.666658 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.666698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.666710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:56 crc 
kubenswrapper[4593]: I0129 10:58:56.666737 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.667161 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 29 10:58:56 crc kubenswrapper[4593]: I0129 10:58:56.788970 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 10:58:56 crc kubenswrapper[4593]: E0129 10:58:56.790104 4593 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.012093 4593 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.016214 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:09:42.460838705 +0000 UTC Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.084848 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.084905 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.084915 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.086361 4593 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ceaedbdaa29ed0b2a6acd11520740f317d68596cd0f13c586849370b73c6416" exitCode=0 Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.086403 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ceaedbdaa29ed0b2a6acd11520740f317d68596cd0f13c586849370b73c6416"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.086458 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.087533 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.087558 4593 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.087570 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.088039 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" exitCode=0 Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.088098 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.088137 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.088852 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.088879 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.088889 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.089976 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.090331 4593 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d58235ff8efa3285de647904b309802e9e59de3498d59d86437eae4b9afa2ad1" exitCode=0 Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.090380 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.090399 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d58235ff8efa3285de647904b309802e9e59de3498d59d86437eae4b9afa2ad1"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.090972 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.090989 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.090997 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.091222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.091244 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.091255 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.097900 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe" exitCode=0 Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.097940 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe"} Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.098056 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.098931 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.098986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:57 crc kubenswrapper[4593]: I0129 10:58:57.098998 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.012081 4593 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.017248 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:16:05.92463265 +0000 UTC Jan 29 10:58:58 crc kubenswrapper[4593]: E0129 10:58:58.022199 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.109369 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18"} Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.113063 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.113057 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a"} Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.114016 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.114064 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.114076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.116024 4593 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="4e33ebd48124c2f6d1d86a91ae435aa3be322f292a3cbf62c0ce4357438a98f4" exitCode=0 Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.116088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4e33ebd48124c2f6d1d86a91ae435aa3be322f292a3cbf62c0ce4357438a98f4"} Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.116167 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.117542 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.117581 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.117592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.118168 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9"} Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.120425 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"96af555718c85d958e5e6ff04df0c2a39cf2a2d90ed75aa8ce3de1aeccd58ff2"} Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.120490 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.121369 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.121396 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.121407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.268076 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.270526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.270554 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.270564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:58 crc kubenswrapper[4593]: I0129 10:58:58.270587 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:58:58 crc kubenswrapper[4593]: E0129 10:58:58.271009 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 29 10:58:58 crc kubenswrapper[4593]: W0129 10:58:58.751945 4593 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:58 crc kubenswrapper[4593]: E0129 10:58:58.752024 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.011563 4593 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.017648 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:51:10.108797239 +0000 UTC Jan 29 10:58:59 crc kubenswrapper[4593]: W0129 10:58:59.122097 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:59 crc kubenswrapper[4593]: E0129 10:58:59.122234 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.126621 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.126672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.126683 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.126692 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.126771 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.128105 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:59 crc 
kubenswrapper[4593]: I0129 10:58:59.128137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.128149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.134486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.134535 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.134549 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.135412 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.135444 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.135454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.136423 4593 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e734fd0a08b7600b8085767254e49062527175e3832640cdea5e1e5e44768e0a" exitCode=0 Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.136497 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.136501 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e734fd0a08b7600b8085767254e49062527175e3832640cdea5e1e5e44768e0a"} Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.136606 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.136748 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.140822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.140854 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.140865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.140877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.140896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.140903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.141292 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.141328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:58:59 crc kubenswrapper[4593]: I0129 10:58:59.141339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:58:59 crc kubenswrapper[4593]: W0129 10:58:59.317598 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:59 crc kubenswrapper[4593]: E0129 10:58:59.317704 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:58:59 crc kubenswrapper[4593]: W0129 10:58:59.361017 4593 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 29 10:58:59 crc kubenswrapper[4593]: E0129 10:58:59.361084 4593 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.017928 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:54:14.751707782 +0000 UTC Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.117086 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.125238 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.142780 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4622e0ea4b36a073e3911d17775945bb6f83746e5cd322462a059d6c68341ebb"} Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.142821 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7dd33879d2a15ddb8015aefff1e36c734a8c8c0a658456e660709c4e9741e4f7"} Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.142830 4593 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ba22abee03c6943bfa869a3948e6eec6a215e76fbc1c0a08ee259a8a9f78e035"} Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.142838 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"079a4bda3084005c12130674c98b9cd0083a81697a2a1a8f620299e27bdfa2f1"} Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.144426 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.146510 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c" exitCode=255 Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.146565 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c"} Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.146610 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.146625 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.146668 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.146713 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.147709 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.147727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.147735 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148193 4593 scope.go:117] "RemoveContainer" containerID="47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148509 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148525 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148532 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148560 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148580 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.148588 
4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:00 crc kubenswrapper[4593]: I0129 10:59:00.966699 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.006397 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.018039 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:16:09.509010209 +0000 UTC Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.150970 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.152864 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709"} Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.153021 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.153096 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.154250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.154277 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.154294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.156332 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.156368 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.156381 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.156392 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"df10dd904fcd69db3d8b0f33a93ed03acf3280b16ab94ee18a9a98a51ad00e8f"} Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.157209 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.157232 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.157230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.157262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:01 
crc kubenswrapper[4593]: I0129 10:59:01.157276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.157244 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.244857 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.245097 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.246208 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.246246 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.246257 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.471789 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.473443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.473513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.473531 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:01 crc kubenswrapper[4593]: I0129 10:59:01.473566 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.019129 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:23:48.448048768 +0000 UTC Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.159403 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.159463 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.159469 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.160674 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.160714 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.160724 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.160723 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.160893 4593 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.160913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.405786 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.405938 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.405977 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.406942 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.406977 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.406988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.657175 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 10:59:02 crc kubenswrapper[4593]: I0129 10:59:02.977247 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.019410 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:37:11.950649634 +0000 UTC Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.160767 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.160810 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.160945 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.161761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.161814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.161831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.162179 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.162295 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.162371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.173931 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 
10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.174396 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.174560 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.175760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.175829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.175855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:03 crc kubenswrapper[4593]: I0129 10:59:03.862978 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.020434 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:12:55.488041674 +0000 UTC Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.163162 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.164116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.164243 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.164328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.249292 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.249680 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.250803 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.250855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:04 crc kubenswrapper[4593]: I0129 10:59:04.250867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:05 crc kubenswrapper[4593]: I0129 10:59:05.021290 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:37:41.392321269 +0000 UTC Jan 29 10:59:05 crc kubenswrapper[4593]: E0129 10:59:05.168814 4593 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 10:59:05 crc kubenswrapper[4593]: I0129 10:59:05.299993 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:05 crc kubenswrapper[4593]: I0129 10:59:05.300162 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 29 10:59:05 crc kubenswrapper[4593]: I0129 10:59:05.301198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:05 crc kubenswrapper[4593]: I0129 10:59:05.301241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:05 crc kubenswrapper[4593]: I0129 10:59:05.301250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:06 crc kubenswrapper[4593]: I0129 10:59:06.021694 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:17:01.887299845 +0000 UTC Jan 29 10:59:06 crc kubenswrapper[4593]: I0129 10:59:06.173972 4593 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 10:59:06 crc kubenswrapper[4593]: I0129 10:59:06.174050 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 10:59:07 crc kubenswrapper[4593]: I0129 10:59:07.022464 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:09:00.561682577 +0000 UTC Jan 29 10:59:08 crc kubenswrapper[4593]: I0129 10:59:08.023399 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:11:03.294397361 +0000 UTC Jan 29 10:59:09 crc kubenswrapper[4593]: I0129 10:59:09.024155 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:13:22.014007418 +0000 UTC Jan 29 10:59:10 crc kubenswrapper[4593]: I0129 10:59:10.012474 4593 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 10:59:10 crc kubenswrapper[4593]: I0129 10:59:10.025393 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:23:13.588176663 +0000 UTC Jan 29 10:59:10 crc kubenswrapper[4593]: I0129 10:59:10.112337 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 10:59:10 crc kubenswrapper[4593]: I0129 10:59:10.112398 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 10:59:10 crc kubenswrapper[4593]: I0129 10:59:10.119092 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 10:59:10 crc kubenswrapper[4593]: I0129 10:59:10.119150 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 10:59:11 crc kubenswrapper[4593]: I0129 10:59:11.026479 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:58:40.376839086 +0000 UTC Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.026951 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:02:50.100570822 +0000 UTC Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.987534 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.987806 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.988912 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.989008 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.989841 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.989884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.989899 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:12 crc kubenswrapper[4593]: I0129 10:59:12.993169 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.028004 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:15:34.909494307 +0000 UTC Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.185911 4593 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.187675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.187836 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.187971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.187888 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.188310 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.893773 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.893917 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.895287 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.895460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.895591 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:13 crc kubenswrapper[4593]: I0129 10:59:13.908559 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.028979 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:39:38.252418976 +0000 UTC Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.187777 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.189575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.189619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.189660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.254283 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:14 crc kubenswrapper[4593]: 
I0129 10:59:14.254506 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.255728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.255766 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:14 crc kubenswrapper[4593]: I0129 10:59:14.255777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.030363 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:41:03.209408457 +0000 UTC Jan 29 10:59:15 crc kubenswrapper[4593]: E0129 10:59:15.116310 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.120189 4593 trace.go:236] Trace[1697639791]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 10:59:04.026) (total time: 11093ms): Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1697639791]: ---"Objects listed" error: 11093ms (10:59:15.120) Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1697639791]: [11.093229993s] [11.093229993s] END Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.120218 4593 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121402 4593 trace.go:236] Trace[1229695765]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 10:59:02.458) (total time: 12662ms): Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1229695765]: ---"Objects listed" error: 12662ms (10:59:15.121) Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1229695765]: [12.662952161s] [12.662952161s] END Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121433 4593 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121488 4593 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121556 4593 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.123101 4593 trace.go:236] Trace[522285009]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 10:59:04.169) (total time: 10953ms): Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[522285009]: ---"Objects listed" error: 10953ms (10:59:15.122) Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[522285009]: [10.953750528s] [10.953750528s] END Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.123123 4593 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 10:59:15 crc kubenswrapper[4593]: E0129 10:59:15.124373 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 10:59:15 crc 
kubenswrapper[4593]: I0129 10:59:15.130128 4593 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.206619 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.213863 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.256371 4593 csr.go:261] certificate signing request csr-pdwsj is approved, waiting to be issued Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.284275 4593 csr.go:257] certificate signing request csr-pdwsj is issued Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.610679 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36752->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.610758 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36752->192.168.126.11:17697: read: connection reset by peer" Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.998884 4593 apiserver.go:52] "Watching apiserver" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.002399 4593 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.002778 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.003260 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.003369 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.003483 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.003785 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.003880 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.004087 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.004157 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.004250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.004462 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.008329 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.008702 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.009156 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.009877 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.010215 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.011914 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.012156 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.012489 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.015996 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.017416 4593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026705 
4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026746 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026768 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026783 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026799 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026819 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026834 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026850 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026880 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 10:59:16 crc 
kubenswrapper[4593]: I0129 10:59:16.026974 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026994 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027028 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027043 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027058 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027072 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027103 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027118 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 10:59:16 crc 
kubenswrapper[4593]: I0129 10:59:16.027151 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027169 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027184 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027200 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027232 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027248 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027264 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027281 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027300 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027316 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027332 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027349 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027381 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027381 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027399 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027418 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027434 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027451 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027472 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027487 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027502 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027518 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027563 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027582 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027613 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027677 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027694 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027710 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027725 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027740 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027754 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027769 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027783 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027818 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027834 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027848 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027864 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027878 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027894 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027900 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027909 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027925 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027942 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027957 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027972 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027987 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028004 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028019 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028034 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028066 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028081 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028112 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028128 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028143 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028158 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028165 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028175 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028191 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028206 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028223 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028221 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028241 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028259 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028273 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028291 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028307 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028323 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028337 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028353 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028367 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028382 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028413 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028435 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028451 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028467 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028482 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028497 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028514 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028529 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028545 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028560 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028575 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028592 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028607 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028622 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028652 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028671 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028686 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028720 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028735 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028749 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028765 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028780 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028795 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028835 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028850 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028881 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028895 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028911 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028927 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028943 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028958 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028973 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028989 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029005 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029021 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029038 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029054 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029071 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029105 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029121 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029136 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029153 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029168 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029184 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029221 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029236 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029253 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029269 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029300 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029315 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029338 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029361 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029384 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029407 4593 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029429 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029451 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029469 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029485 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029501 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029519 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029536 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029552 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029568 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029585 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029602 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029618 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029651 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029669 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029685 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029700 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029716 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029732 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029748 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029767 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029783 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029831 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029849 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029881 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029897 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029915 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029931 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029948 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029966 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029982 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029999 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030015 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030032 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030049 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030066 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030083 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030137 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030177 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030279 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035860 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028409 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028675 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029092 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029559 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029837 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030004 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030159 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.031809 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:39:51.845570733 +0000 UTC Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.032297 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.532166122 +0000 UTC m=+22.405200313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032327 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032407 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032488 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032894 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033263 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033362 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033438 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033473 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033670 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033744 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034140 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034301 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034558 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034748 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034798 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034919 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035344 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035453 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035517 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036323 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036500 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036525 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036788 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036799 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036984 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037041 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037253 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037359 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037479 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037645 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037994 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038208 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038141 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038385 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038649 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039046 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039309 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039594 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040059 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040228 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040380 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040530 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040579 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040697 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040937 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040192 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.041364 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047164 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047270 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047330 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047441 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047819 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.048333 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.048768 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.052624 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053771 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053775 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053833 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.057452 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.057650 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.060192 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.060515 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.060756 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061081 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061250 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061289 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061420 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061771 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061875 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062081 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062157 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062542 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062585 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062991 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063171 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063280 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063384 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063295 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063592 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063728 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064016 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mkxdt"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064116 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064163 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064261 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064386 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065241 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065322 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065321 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065465 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065521 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065617 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.066403 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.066561 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.067627 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068119 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068336 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068351 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068610 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068874 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070001 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070147 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070451 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070765 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070939 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071030 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071204 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071345 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071307 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072094 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072093 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072404 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072426 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072536 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072578 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072887 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072961 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.073091 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074333 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074463 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074569 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074804 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075041 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075058 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075224 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075398 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075405 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075550 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075813 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075722 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076166 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076369 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076592 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076616 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077009 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077131 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077461 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077565 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077834 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.078447 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.078786 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079024 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079159 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079327 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079689 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080054 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080065 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080262 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082236 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082290 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082670 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082772 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.083213 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.084007 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.085614 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.085917 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086087 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086219 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086661 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086890 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.052786 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087787 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087838 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087868 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087894 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" 
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087914 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087937 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087957 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088038 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088056 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088073 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088055 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088091 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088181 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088199 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088373 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088573 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.088609 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.088678 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.588660977 +0000 UTC m=+22.461695168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.089777 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.089801 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.090058 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.590048525 +0000 UTC m=+22.463082716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.090076 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.090581 4593 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.092495 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093455 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093510 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093529 4593 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093542 4593 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093554 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093566 4593 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093579 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093591 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093603 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093614 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093626 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108712 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108909 4593 reconciler_common.go:293] "Volume detached for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108967 4593 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109037 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109096 4593 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109147 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109202 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109258 4593 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109315 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109371 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109424 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109481 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109537 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109593 4593 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109698 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109759 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109812 4593 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.096533 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109882 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109994 4593 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110013 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110030 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110044 4593 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110056 4593 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110069 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110081 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110090 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on 
node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110099 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110107 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110116 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110124 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110139 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110148 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110160 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110171 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110182 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110191 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110202 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110212 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110222 4593 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110233 4593 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110244 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110257 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110269 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110280 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110290 4593 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110304 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110316 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110325 4593 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110334 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110342 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110351 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110359 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110369 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110377 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110387 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110396 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110408 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110420 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110432 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110444 4593 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110456 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110466 4593 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110474 4593 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110482 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110492 4593 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110500 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110508 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110516 4593 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110524 4593 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110532 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110540 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110548 4593 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110556 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110564 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110573 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110581 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110591 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110599 4593 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110608 4593 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110616 4593 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110647 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110656 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110666 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110675 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110683 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110691 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110700 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110708 4593 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110716 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110725 4593 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110733 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110740 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110748 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110756 4593 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110764 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110773 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110783 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110794 4593 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110804 4593 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110814 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110822 4593 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110830 4593 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110837 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110845 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node 
\"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110854 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110863 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110871 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110879 4593 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110888 4593 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110896 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110904 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110912 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110922 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110931 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110940 4593 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110949 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110957 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110965 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110973 4593 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110982 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110990 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110999 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111007 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111015 4593 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111023 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111032 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111040 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111048 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111056 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111063 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111071 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111080 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111088 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111096 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111104 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111112 4593 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111121 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111129 4593 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111137 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111145 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111153 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111161 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111169 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111177 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111186 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111194 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111203 4593 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111211 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111219 4593 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111227 4593 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111236 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111244 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111252 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111261 4593 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111268 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111276 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111284 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111293 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111302 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111311 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.096020 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.096312 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.103724 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.105265 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107006 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107033 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107186 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.107957 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.111397 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.111411 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.111481 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.611444652 +0000 UTC m=+22.484478843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.095077 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.101263 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.106823 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108706 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.112046 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.114495 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.118243 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.119059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.120687 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.121916 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.123051 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.123139 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.123575 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.123772 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.123796 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.124225 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.125018 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.62434123 +0000 UTC m=+22.497375421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.131891 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.146982 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.147038 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.153029 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.173409 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.207938 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.212538 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213099 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213144 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtz8\" (UniqueName: \"kubernetes.io/projected/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-kube-api-access-gjtz8\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213183 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-hosts-file\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213313 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213328 4593 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213341 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213351 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213362 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213374 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213386 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213396 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213407 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213418 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213429 4593 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213440 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213452 4593 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213462 4593 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213473 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213485 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213495 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213505 4593 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213516 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213551 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213779 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.214137 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.227205 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" exitCode=255 Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.227724 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709"} Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.227778 4593 scope.go:117] "RemoveContainer" containerID="47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.240958 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.255571 4593 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.256237 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.282013 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.286388 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 10:54:15 +0000 UTC, rotation deadline is 2026-12-12 08:13:55.378279691 +0000 UTC Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.286448 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7605h14m39.091854056s for next certificate rotation Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.305600 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314582 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314686 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtz8\" (UniqueName: \"kubernetes.io/projected/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-kube-api-access-gjtz8\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314725 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-hosts-file\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314827 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-hosts-file\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.316334 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.323056 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.339036 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.341262 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.346759 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtz8\" (UniqueName: \"kubernetes.io/projected/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-kube-api-access-gjtz8\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.353292 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.353745 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.353913 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.357396 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.376381 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.396338 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.420911 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.422123 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.444669 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.467407 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.616931 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.616994 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.617018 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.617036 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617095 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617171 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.617155161 +0000 UTC m=+23.490189352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617440 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617485 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.61747343 +0000 UTC m=+23.490507621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617521 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617533 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617543 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617552 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.617542681 +0000 UTC m=+23.490576872 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617567 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 10:59:17.617560372 +0000 UTC m=+23.490594563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.718272 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718403 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718417 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718427 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718463 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.718451135 +0000 UTC m=+23.591485326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.933401 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xpt4q"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.933810 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xpt4q" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.935407 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zk9np"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.935982 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p4zf2"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.936252 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.936599 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.937600 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.937857 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.938774 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.938915 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.939903 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.940499 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.941346 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942233 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942424 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942581 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942928 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.946262 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.967400 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.978703 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.986942 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.996129 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.003117 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.012764 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.016932 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.022548 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:58:59Z\\\",\\\"message\\\":\\\"W0129 10:58:58.855341 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 10:58:58.855626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769684338 cert, and key in /tmp/serving-cert-1536064180/serving-signer.crt, /tmp/serving-cert-1536064180/serving-signer.key\\\\nI0129 10:58:59.363427 1 observer_polling.go:159] Starting file observer\\\\nW0129 10:58:59.365835 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 10:58:59.366014 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:58:59.368330 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1536064180/tls.crt::/tmp/serving-cert-1536064180/tls.key\\\\\\\"\\\\nF0129 10:58:59.631826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.032038 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.041898 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.055264 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:53:54.938803927 +0000 UTC Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.064429 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.073057 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.082875 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.083521 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.084489 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.085276 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.085942 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.086479 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.087150 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.087785 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.088493 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: 
I0129 10:59:17.089178 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.091142 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.091613 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.092989 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.093931 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.099993 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.100734 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.101301 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.102138 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.102528 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.103170 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.103861 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.104373 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.104964 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.105440 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.106136 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.106535 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.107194 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.110300 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.110880 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.111547 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.112539 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.113058 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.113112 4593 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.113485 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.115792 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.116314 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" 
path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.116802 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.119190 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.119868 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.120729 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.121348 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.122590 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-os-release\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123109 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123109 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-k8s-cni-cncf-io\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123288 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-multus\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123310 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-hostroot\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123346 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-55q6g\" (UniqueName: \"kubernetes.io/projected/5eed1f11-8e73-4894-965f-a670f6c877b3-kube-api-access-55q6g\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123363 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-cnibin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-socket-dir-parent\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123422 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-netns\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123479 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhqmv\" (UniqueName: \"kubernetes.io/projected/c76afd0b-36c6-4faa-9278-c08c60c483e9-kube-api-access-mhqmv\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-etc-kubernetes\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123564 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7p5\" (UniqueName: \"kubernetes.io/projected/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-kube-api-access-8r7p5\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123583 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123606 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-os-release\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123645 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123703 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123781 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed1f11-8e73-4894-965f-a670f6c877b3-rootfs\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cnibin\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123857 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-bin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123872 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-conf-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123886 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-daemon-config\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123903 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-cni-binary-copy\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123921 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-kubelet\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " 
pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123935 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-multus-certs\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123965 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed1f11-8e73-4894-965f-a670f6c877b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123984 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-system-cni-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124002 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed1f11-8e73-4894-965f-a670f6c877b3-proxy-tls\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124020 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124035 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-system-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124485 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.125593 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.126559 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.127005 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 10:59:17 
crc kubenswrapper[4593]: I0129 10:59:17.127927 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.128391 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.130029 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.130507 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.130696 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.131504 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.132149 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.132962 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.133623 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.134776 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.138834 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.147472 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.158460 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.170091 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:58:59Z\\\",\\\"message\\\":\\\"W0129 10:58:58.855341 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
10:58:58.855626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769684338 cert, and key in /tmp/serving-cert-1536064180/serving-signer.crt, /tmp/serving-cert-1536064180/serving-signer.key\\\\nI0129 10:58:59.363427 1 observer_polling.go:159] Starting file observer\\\\nW0129 10:58:59.365835 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 10:58:59.366014 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:58:59.368330 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1536064180/tls.crt::/tmp/serving-cert-1536064180/tls.key\\\\\\\"\\\\nF0129 10:58:59.631826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.178387 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.185270 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.195317 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.204056 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55q6g\" (UniqueName: \"kubernetes.io/projected/5eed1f11-8e73-4894-965f-a670f6c877b3-kube-api-access-55q6g\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225527 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-cnibin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225550 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-socket-dir-parent\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225579 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-netns\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225600 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhqmv\" (UniqueName: \"kubernetes.io/projected/c76afd0b-36c6-4faa-9278-c08c60c483e9-kube-api-access-mhqmv\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225650 4593 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-cnibin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225663 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-etc-kubernetes\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225695 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7p5\" (UniqueName: \"kubernetes.io/projected/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-kube-api-access-8r7p5\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225699 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-etc-kubernetes\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225713 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225728 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225743 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225742 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-netns\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225758 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-os-release\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225791 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-socket-dir-parent\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225828 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed1f11-8e73-4894-965f-a670f6c877b3-rootfs\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225850 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-os-release\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225872 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cnibin\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225874 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed1f11-8e73-4894-965f-a670f6c877b3-rootfs\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225850 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cnibin\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225940 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-daemon-config\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226045 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-bin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226077 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-conf-dir\") pod \"multus-xpt4q\" (UID: 
\"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226092 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-cni-binary-copy\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226106 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-kubelet\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226121 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-multus-certs\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226138 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed1f11-8e73-4894-965f-a670f6c877b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226151 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-system-cni-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226167 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-system-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226183 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed1f11-8e73-4894-965f-a670f6c877b3-proxy-tls\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226198 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226213 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-multus\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " 
pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226227 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-hostroot\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226241 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-os-release\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226255 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-k8s-cni-cncf-io\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226264 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226295 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-bin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226298 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-k8s-cni-cncf-io\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-conf-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-system-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226606 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226676 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-multus-certs\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226711 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-kubelet\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226738 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-daemon-config\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226742 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-multus\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-cni-binary-copy\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226935 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-system-cni-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226970 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-hostroot\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.227013 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-os-release\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.227257 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.227495 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed1f11-8e73-4894-965f-a670f6c877b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.230930 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed1f11-8e73-4894-965f-a670f6c877b3-proxy-tls\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.232433 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236047 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.236218 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236648 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"733b54284c53dba7cd23ad45db0c26275c95ac566949f4efed0456268a8a20c2"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.238943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mkxdt" event={"ID":"b36fce0b-62b3-4076-a13e-e6048a4d9a4e","Type":"ContainerStarted","Data":"0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.238969 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mkxdt" event={"ID":"b36fce0b-62b3-4076-a13e-e6048a4d9a4e","Type":"ContainerStarted","Data":"10265cd6a588580a14d990e741ef622df68d39b013bae419362fb8669801ea24"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.240339 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"69f261cd01b221f59b9f0148d4f97e91703379b517b24361eae47b76c3f6abd4"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.241800 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.241841 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ef51da5632e392d63a93a615ba597a7b97d242895b667eea43a587c69774adb4"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.246501 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhqmv\" (UniqueName: \"kubernetes.io/projected/c76afd0b-36c6-4faa-9278-c08c60c483e9-kube-api-access-mhqmv\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.247854 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.248131 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.250423 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55q6g\" (UniqueName: \"kubernetes.io/projected/5eed1f11-8e73-4894-965f-a670f6c877b3-kube-api-access-55q6g\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.250469 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7p5\" (UniqueName: \"kubernetes.io/projected/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-kube-api-access-8r7p5\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.254233 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.258898 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.261790 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: W0129 10:59:17.262428 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76afd0b_36c6_4faa_9278_c08c60c483e9.slice/crio-1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf WatchSource:0}: Error finding container 1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf: Status 404 returned error can't find the container with id 1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf Jan 29 10:59:17 crc kubenswrapper[4593]: W0129 10:59:17.288740 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf08558_eb2b_4c00_8494_6f9691a7e3b6.slice/crio-49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919 WatchSource:0}: Error finding container 49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919: Status 404 returned error can't find the container with id 49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919 Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.288923 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.304987 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.312966 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.313865 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324471 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324780 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324814 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324931 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324989 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.325126 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.325240 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.328181 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.350187 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.373260 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.390324 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.403678 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.414142 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.425565 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429379 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 
10:59:17.429408 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429451 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429465 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429491 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429506 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429527 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429589 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429604 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429646 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429676 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429691 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429706 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429728 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429745 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429773 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429788 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.436394 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117
eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.451847 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.463252 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.480912 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.493830 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.510802 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.520669 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530440 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530726 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530812 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530888 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530975 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531073 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531161 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"ovnkube-node-vmt7l\" (UID: 
\"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531247 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531332 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531403 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531485 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531565 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531650 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531690 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531496 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531724 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: 
\"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531769 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532107 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532197 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532296 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532366 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 
10:59:17.532235 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532340 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532528 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532616 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532733 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532818 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533208 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533411 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533512 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.535520 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.556475 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.598514 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.618986 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.631448 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.633811 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.633872 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.633942 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.633914204 +0000 UTC m=+25.506948435 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.633955 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.634000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.634024 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634148 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634179 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.634172401 +0000 UTC m=+25.507206592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634196 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.634190071 +0000 UTC m=+25.507224262 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634230 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634243 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634253 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634284 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.634273194 +0000 UTC m=+25.507307385 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.644432 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.649582 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.657666 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: W0129 10:59:17.660323 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943b00a1_4aae_4054_b4fd_dc512fe58270.slice/crio-1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7 WatchSource:0}: Error finding container 1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7: Status 404 returned error can't find the container with id 1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7 Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.676374 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.698974 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.734691 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734812 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734831 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734857 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734907 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.734891309 +0000 UTC m=+25.607925500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.055998 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:06:00.144772909 +0000 UTC Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.074429 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.074474 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.074435 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.074567 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.075399 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.075612 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.245917 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" exitCode=0 Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.246039 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.246101 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.247378 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.247412 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.249068 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.251110 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.251142 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.251153 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"2cb67a7dc3348ff0e620365865ac008e4766d68d233d0f9b6ae4fe16981dda04"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254026 4593 generic.go:334] "Generic (PLEG): container 
finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8" exitCode=0 Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254080 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerStarted","Data":"49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254475 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.254602 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.286757 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.309735 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.323582 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.334423 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.346251 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.359229 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.378540 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/
\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.391954 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.406025 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.421999 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.437930 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.447531 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.459449 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.471999 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.481550 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.492501 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.504570 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.516581 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.533060 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.548437 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.566323 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.578733 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.592201 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.604203 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.622500 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc 
kubenswrapper[4593]: I0129 10:59:18.642528 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.056924 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:38:53.855874076 +0000 UTC Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.146921 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-42qv9"] Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.147293 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.149089 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.150264 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.150320 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.150389 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.161218 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.175183 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.192905 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.202873 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.214696 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.229259 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.247049 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.250146 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd2v\" (UniqueName: \"kubernetes.io/projected/bae5deb1-f488-4080-8a68-215c491015f7-kube-api-access-2kd2v\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.250197 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae5deb1-f488-4080-8a68-215c491015f7-host\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.250222 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae5deb1-f488-4080-8a68-215c491015f7-serviceca\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.258245 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.258291 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.260231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerStarted","Data":"bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27"} Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.265458 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.283137 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.297496 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.308132 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.319776 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.338519 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.350030 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.351429 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kd2v\" (UniqueName: \"kubernetes.io/projected/bae5deb1-f488-4080-8a68-215c491015f7-kube-api-access-2kd2v\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.351535 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae5deb1-f488-4080-8a68-215c491015f7-host\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.351561 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae5deb1-f488-4080-8a68-215c491015f7-serviceca\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.352141 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae5deb1-f488-4080-8a68-215c491015f7-host\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.352806 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae5deb1-f488-4080-8a68-215c491015f7-serviceca\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.362983 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.370532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kd2v\" (UniqueName: \"kubernetes.io/projected/bae5deb1-f488-4080-8a68-215c491015f7-kube-api-access-2kd2v\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.374387 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.391833 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.428466 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.467251 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.510115 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.515190 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: W0129 10:59:19.531522 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae5deb1_f488_4080_8a68_215c491015f7.slice/crio-b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7 WatchSource:0}: Error finding container b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7: Status 404 returned error can't find the container with id b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7 Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.551196 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.593500 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.634231 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656213 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.656351 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.656329841 +0000 UTC m=+29.529364032 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656933 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656967 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656988 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657089 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657130 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.657121951 +0000 UTC m=+29.530156132 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657337 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657357 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657362 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657386 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657401 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.657390368 +0000 UTC m=+29.530424569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657421 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.65741165 +0000 UTC m=+29.530445841 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.672221 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.709884 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.755360 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.758005 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758190 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758232 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758246 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758309 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.758291013 +0000 UTC m=+29.631325264 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.797895 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.837363 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.058019 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:45:34.69138042 +0000 UTC Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.074111 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.074166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:20 crc kubenswrapper[4593]: E0129 10:59:20.074256 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:20 crc kubenswrapper[4593]: E0129 10:59:20.074388 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.074491 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:20 crc kubenswrapper[4593]: E0129 10:59:20.074678 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.266358 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27" exitCode=0 Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.266615 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.268091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42qv9" event={"ID":"bae5deb1-f488-4080-8a68-215c491015f7","Type":"ContainerStarted","Data":"b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.268113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42qv9" event={"ID":"bae5deb1-f488-4080-8a68-215c491015f7","Type":"ContainerStarted","Data":"b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271496 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271505 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.284620 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.296719 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.308141 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.319042 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.330342 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.344555 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.357298 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.368536 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.384804 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.394858 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.405894 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.424285 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.439162 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.453037 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.467304 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.480590 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.517183 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.548779 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.590427 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.631001 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.670217 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.708282 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.747968 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.790884 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.829450 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.872108 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.912535 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.948406 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.059325 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:36:16.246493116 +0000 UTC Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.276223 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f" exitCode=0 Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.276265 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.295283 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.320787 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.343714 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.356420 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.367130 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.378275 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.388200 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.397824 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.408815 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.420603 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.440027 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.450378 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.469832 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.510487 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.524664 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526715 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526856 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.541189 4593 
kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.541456 4593 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542695 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542772 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.560386 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563174 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563206 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563218 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563228 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.575788 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578775 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578808 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.589897 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593843 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593885 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593898 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593907 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.604723 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607510 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607585 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.617628 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.617771 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618939 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618994 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721886 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825291 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825332 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029813 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029873 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029891 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029907 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.060484 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:25:23.331979727 +0000 UTC Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.073956 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.074015 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:22 crc kubenswrapper[4593]: E0129 10:59:22.074082 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.074026 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:22 crc kubenswrapper[4593]: E0129 10:59:22.074155 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:22 crc kubenswrapper[4593]: E0129 10:59:22.074252 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133215 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133226 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133240 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133252 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236659 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236778 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.282504 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7" exitCode=0 Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.282571 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.287862 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.306404 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.324604 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338584 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338596 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338604 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.344873 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b785069130
66867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.356450 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.367037 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.377916 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.388578 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.400008 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.412148 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.422804 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.438020 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443197 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443265 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443298 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.449719 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.462085 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.479311 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545864 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545880 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545889 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648505 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751783 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751828 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751870 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854517 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854562 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854572 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854587 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854597 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956956 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059408 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059420 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059444 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.060668 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:54:49.501933149 +0000 UTC Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162463 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162505 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264479 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264540 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264568 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.293486 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d" exitCode=0 Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.293522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.307061 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.318797 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.330451 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.341555 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.354940 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.367022 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368931 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368955 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.378838 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.393303 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.412338 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.427868 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.439740 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.451835 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.462167 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470850 4593 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470929 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.474792 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573620 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573681 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573689 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573705 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573716 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676426 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676435 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676458 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.690973 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.691067 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691115 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.69108688 +0000 UTC m=+37.564121091 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691155 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.691176 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691197 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.691184933 +0000 UTC m=+37.564219124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.691225 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691315 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691388 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.691372409 +0000 UTC m=+37.564406600 (durationBeforeRetry 8s). 
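The volume operations above are parked for 8 seconds ("No retries permitted until ... durationBeforeRetry 8s"): the kubelet backs off exponentially when the same volume operation keeps failing, here because the CSI driver is not yet registered and the secret/configmap objects are not yet known to the kubelet. A rough sketch of that doubling-with-a-cap behaviour; the initial delay, factor, and cap are illustrative assumptions, not the kubelet's exact constants:

```go
// Rough sketch of capped exponential backoff for a repeatedly failing
// operation; constants are assumptions for illustration only.
package main

import (
	"fmt"
	"time"
)

func backoff(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 0; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	// With a 500ms initial delay, the 4th consecutive failure waits 8s,
	// matching the durationBeforeRetry seen in the log.
	for failures := 0; failures <= 6; failures++ {
		fmt.Printf("failure %d -> retry after %v\n",
			failures, backoff(500*time.Millisecond, 2*time.Minute, failures))
	}
}
```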
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691388 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691409 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691420 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691463 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.691453981 +0000 UTC m=+37.564488252 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778768 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778779 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778795 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778806 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.792610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792801 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792833 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792845 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792913 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.792882638 +0000 UTC m=+37.665916829 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881267 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881287 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881295 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
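The NodeNotReady condition keeps repeating because the container runtime reports NetworkReady=false until a CNI network configuration exists in /etc/kubernetes/cni/net.d/, which the network provider pods (Multus and OVN-Kubernetes, both visible elsewhere in this log) are expected to populate once they come up. A simplified sketch of that kind of readiness probe, scanning the directory for .conf/.conflist/.json files; this is an illustration, not the CRI-O implementation:

```go
// Simplified readiness probe: the network is considered "not ready" until at
// least one CNI configuration file exists in the configured directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigured(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}
```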
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983159 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983194 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.061740 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:09:36.832483787 +0000 UTC Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.074018 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.074048 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.074018 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:24 crc kubenswrapper[4593]: E0129 10:59:24.074129 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:24 crc kubenswrapper[4593]: E0129 10:59:24.074195 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:24 crc kubenswrapper[4593]: E0129 10:59:24.074265 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
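The certificate_manager.go line just above reports the kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2026-01-06, i.e. rotation is already overdue at the time of this log, so the kubelet will try to renew as soon as it can reach the API server. The deadline is picked as a jittered point late in the certificate's validity window; a sketch of that calculation, with the 70-90% window stated as an assumption about the upstream client-go behaviour rather than something verified from this log:

```go
// Sketch: pick a rotation deadline at a random point between 70% and 90% of
// the certificate's validity window (assumed approximation of the client-go
// certificate manager's jitter policy).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // random fraction in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notBefore := time.Date(2025, time.February, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("rotation is overdue; renewal will be attempted immediately")
	}
}
```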
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088553 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088597 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088680 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088715 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088730 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191772 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191815 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293738 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293753 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293762 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.299114 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022" exitCode=0 Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.299163 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.319561 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.333593 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.344179 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.365399 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.374911 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.392500 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396523 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396535 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.412174 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.425076 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.441771 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.451455 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.463100 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.473622 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.483803 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.493479 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499181 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601910 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601943 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601951 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601973 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.655456 4593 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705285 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705317 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808167 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808219 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808233 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808263 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911088 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911129 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911143 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911152 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013713 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013734 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.062935 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:26:00.731915272 +0000 UTC Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.087413 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.101048 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.115022 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116533 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.136026 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b785069130
66867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.150482 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.168096 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.184317 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.196031 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.207244 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.218848 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219071 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219205 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219259 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.221695 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.236313 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.246345 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.256802 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.265099 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.304672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerStarted","Data":"49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.309463 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" 
event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.309862 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.309913 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.310078 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.317259 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325700 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.330815 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.340288 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.351101 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.356482 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.356686 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.362440 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.373964 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.388175 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.397541 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.407596 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.415615 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428024 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428217 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428225 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428246 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.444695 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b785069130
66867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.455666 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.468113 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.478606 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.487483 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.496821 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.506164 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.517319 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.528440 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530389 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530398 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530412 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530421 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.540858 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.548971 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.559716 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.568365 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.578749 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.588378 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.602581 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.622916 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1
e821ea70de3089d83bbbb8c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633067 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633252 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633418 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633532 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737219 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737256 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737267 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737284 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737295 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839440 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839463 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839472 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942709 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942823 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.943001 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045795 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045810 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045820 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.063957 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:22:43.331688545 +0000 UTC Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.074310 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.074352 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.074456 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:26 crc kubenswrapper[4593]: E0129 10:59:26.074450 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:26 crc kubenswrapper[4593]: E0129 10:59:26.074572 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:26 crc kubenswrapper[4593]: E0129 10:59:26.074742 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147523 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147594 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147609 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147620 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249385 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249397 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.351960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.351991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.352001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.352018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.352030 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454479 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454489 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556740 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659255 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659285 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659296 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762263 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762279 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762288 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864560 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966819 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966944 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.065141 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:39:36.181342595 +0000 UTC Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.068961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069045 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069078 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069109 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069130 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171086 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171797 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171834 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171845 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274329 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376668 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376703 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.478971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479003 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479037 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581487 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581533 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581542 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581565 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683403 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683447 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683470 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786256 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888812 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888851 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888880 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888892 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990956 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990988 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.065478 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:03:59.082677398 +0000 UTC Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.074817 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.074917 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:28 crc kubenswrapper[4593]: E0129 10:59:28.075051 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.075078 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:28 crc kubenswrapper[4593]: E0129 10:59:28.075235 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:28 crc kubenswrapper[4593]: E0129 10:59:28.075302 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093181 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093190 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093212 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195547 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195612 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195622 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195651 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195661 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298167 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298191 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298204 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.319659 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/0.log" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.322588 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0" exitCode=1 Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.322666 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.323778 4593 scope.go:117] "RemoveContainer" containerID="da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.340551 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.355344 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.370034 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.380306 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.394060 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400480 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400508 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400518 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400531 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400540 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.408217 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.421395 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.430969 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.448157 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.463087 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.476921 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.490918 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502909 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502945 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502990 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.508522 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.525506 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606192 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606215 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606225 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708734 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810853 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810874 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913342 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913352 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016485 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.026844 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424"] Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.027274 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.029252 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.029453 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.041011 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.054169 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.065982 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:13:39.311043599 +0000 UTC Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.067039 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.086796 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.102983 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1
e821ea70de3089d83bbbb8c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.117223 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118833 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118844 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118874 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.131078 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144202 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144266 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144301 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/47b33c04-1415-41d1-9264-1c4b9de87fff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144278 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" 
Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144360 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhqm\" (UniqueName: \"kubernetes.io/projected/47b33c04-1415-41d1-9264-1c4b9de87fff-kube-api-access-8fhqm\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.156856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.169435 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.181664 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.190668 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.204972 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.219148 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220494 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220507 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220528 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.230596 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.244914 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhqm\" (UniqueName: \"kubernetes.io/projected/47b33c04-1415-41d1-9264-1c4b9de87fff-kube-api-access-8fhqm\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.244975 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" 
Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.245090 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.245127 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47b33c04-1415-41d1-9264-1c4b9de87fff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.245789 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.246345 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.255018 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47b33c04-1415-41d1-9264-1c4b9de87fff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.274173 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhqm\" (UniqueName: \"kubernetes.io/projected/47b33c04-1415-41d1-9264-1c4b9de87fff-kube-api-access-8fhqm\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328459 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.330446 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.331163 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/0.log" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.334033 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" exitCode=1 Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.334063 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.334092 4593 scope.go:117] "RemoveContainer" containerID="da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.335892 4593 scope.go:117] "RemoveContainer" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" Jan 29 10:59:29 crc kubenswrapper[4593]: E0129 10:59:29.336104 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.340180 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.352393 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.368188 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.389676 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 
stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.405988 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.419689 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.431933 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432530 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432545 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432554 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.442251 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.453204 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.465869 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.477288 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.486432 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.497452 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.509242 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.522213 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.533486 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534768 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534794 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637567 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637579 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637650 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739953 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739981 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847687 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847697 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847721 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949776 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949868 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052169 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052265 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052274 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.067001 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:43:38.436055728 +0000 UTC Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.074318 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.074347 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.074372 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.074478 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.074526 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.074576 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.145717 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7jm9m"] Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.146221 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.146292 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154603 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154694 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154704 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.166355 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.182291 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.196904 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.208091 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.219475 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.231169 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.241411 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.252410 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.254032 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.254080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27pv\" (UniqueName: \"kubernetes.io/projected/7d229804-724c-4e21-89ac-e3369b615389-kube-api-access-t27pv\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257189 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257225 4593 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257235 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257258 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.268933 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.281855 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.292558 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.303213 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.312648 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.323565 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.335971 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.338486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" event={"ID":"47b33c04-1415-41d1-9264-1c4b9de87fff","Type":"ContainerStarted","Data":"573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.338524 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" event={"ID":"47b33c04-1415-41d1-9264-1c4b9de87fff","Type":"ContainerStarted","Data":"75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.338536 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" event={"ID":"47b33c04-1415-41d1-9264-1c4b9de87fff","Type":"ContainerStarted","Data":"fd8b7bfa9bdbb54b1d66f2071c1fd2e0fa14dee6b604c8f41f797dca0c4a3987"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.340307 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:30 crc 
kubenswrapper[4593]: I0129 10:59:30.352472 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.354797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: 
\"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.354834 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27pv\" (UniqueName: \"kubernetes.io/projected/7d229804-724c-4e21-89ac-e3369b615389-kube-api-access-t27pv\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.354901 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.354954 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:30.854940432 +0000 UTC m=+36.727974613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358794 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358819 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358838 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358846 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.366203 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.371035 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27pv\" (UniqueName: \"kubernetes.io/projected/7d229804-724c-4e21-89ac-e3369b615389-kube-api-access-t27pv\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.381383 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.394579 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.407755 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.419671 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.430761 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.440522 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.449724 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460682 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460700 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460711 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.461855 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.475710 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.492267 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 
stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.502864 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.515749 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.527395 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.538911 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.550447 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563085 4593 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563182 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666277 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666288 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769448 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769512 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.860228 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.860513 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.860606 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.860585969 +0000 UTC m=+37.733620170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872427 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975808 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975822 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.067867 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:24:27.655288231 +0000 UTC Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078887 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078902 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078937 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183411 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183424 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287803 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287849 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390807 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390872 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390883 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493572 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493587 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493595 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.595964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596070 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699168 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769379 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769509 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769618 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769624 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769570573 +0000 UTC m=+53.642604814 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769713 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769692426 +0000 UTC m=+53.642726627 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769738 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769886 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769901 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769913 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769929 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769948 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769934152 +0000 UTC m=+53.642968363 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.770015 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769991404 +0000 UTC m=+53.643025645 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.802730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803047 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803063 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803117 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.870686 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.870746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.870897 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.870960 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:33.870942349 +0000 UTC m=+39.743976550 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.870993 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.871037 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.871056 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.871141 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.871117754 +0000 UTC m=+53.744151985 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906030 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906121 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906150 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906161 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009157 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009600 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009776 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009889 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.017799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.017935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.018004 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.018067 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.018127 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.028806 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032748 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032808 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.044480 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048542 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048588 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.063153 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067713 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067765 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067810 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.068444 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:07:00.553873919 +0000 UTC Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074060 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074107 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074065 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.074247 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074266 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.074765 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074796 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.074861 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.075018 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.088856 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101891 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.123701 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.123937 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127233 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127341 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127356 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229530 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332297 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.351318 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.353332 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.353699 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.370598 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.386019 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.400133 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.416736 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.428825 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434787 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434824 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434836 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434871 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.448674 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.463720 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.474722 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.488731 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.501849 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.519306 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536847 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536909 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536922 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.538734 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.555079 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc 
kubenswrapper[4593]: I0129 10:59:32.573083 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.589605 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e543
19f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.610856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51c
f5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 
services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639121 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639428 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639623 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742580 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845126 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845156 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947145 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.049958 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050059 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050102 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.069305 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:49:32.656366301 +0000 UTC Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153009 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153079 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153125 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256247 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256312 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256341 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358943 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358973 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358985 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461169 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461179 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461197 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461210 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563218 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563255 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563282 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563293 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665663 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665715 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665729 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665756 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768374 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871391 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871418 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871451 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871492 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.890836 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:33 crc kubenswrapper[4593]: E0129 10:59:33.891013 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:33 crc kubenswrapper[4593]: E0129 10:59:33.891060 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:37.891046633 +0000 UTC m=+43.764080824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975281 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975329 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.070158 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:49:14.772969829 +0000 UTC Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074553 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074571 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074627 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074678 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.074773 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.074959 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.075034 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.075111 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078458 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078544 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078561 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181081 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181165 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181230 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283509 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283521 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386809 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386878 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386889 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.489985 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490020 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490050 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592362 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695311 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695358 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695398 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797790 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899588 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899600 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899624 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002846 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002910 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002921 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.070719 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:18:20.710425674 +0000 UTC Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.087757 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}
\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.102104 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104615 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104705 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.117040 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.129348 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.144038 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.156208 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.174413 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.188469 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208225 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208719 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.222087 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.233057 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.242910 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.253754 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.266572 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.280154 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.300835 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51c
f5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 
services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311591 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311621 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311676 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414379 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414466 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517412 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517453 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619987 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619996 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.722961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723036 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826545 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826553 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826567 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826576 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930156 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930172 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930181 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033718 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033744 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.071777 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:38:04.947927552 +0000 UTC Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074108 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074130 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.074294 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074770 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.074875 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074931 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.075007 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.075131 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137127 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.239877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.239974 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.239990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.240016 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.240042 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343129 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343215 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445392 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548654 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548691 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651444 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651484 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753421 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753445 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753454 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856552 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856573 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856600 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856671 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959087 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959167 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959181 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061045 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061093 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061109 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061119 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.072684 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:43:23.583581723 +0000 UTC Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164345 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164355 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266887 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266916 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369887 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471676 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471721 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471742 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574527 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574583 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574592 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677157 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779378 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779480 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779495 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882311 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882333 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882341 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.931154 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:37 crc kubenswrapper[4593]: E0129 10:59:37.931404 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:37 crc kubenswrapper[4593]: E0129 10:59:37.931488 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:45.931466846 +0000 UTC m=+51.804501047 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.984963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.984999 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.985008 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.985021 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.985030 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.073756 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:02:17.621483792 +0000 UTC Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074385 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074495 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.074587 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.074719 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074461 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.074884 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074784 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.075144 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087719 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087769 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087783 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087793 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190365 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.292988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293046 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293084 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.395939 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396000 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396015 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396034 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396050 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498530 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498541 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498567 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601188 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601214 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601274 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705021 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705078 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705089 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808695 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808748 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912157 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912175 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015846 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.074267 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:46:12.783600905 +0000 UTC Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.117991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118079 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118129 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118150 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221254 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221264 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324049 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324142 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427118 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427163 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427208 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530450 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530753 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530934 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.531055 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633548 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633560 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633586 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737062 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.738020 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840881 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840909 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943457 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943813 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943973 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046400 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046435 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074329 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074387 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074463 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074481 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074657 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074340 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074812 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:03:01.973864181 +0000 UTC Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074871 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074914 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148720 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148737 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148774 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252601 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.355506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.355781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.355932 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.356063 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.356175 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459758 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459808 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459863 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.563202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.563620 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.563852 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.564012 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.564158 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667883 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667962 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.668005 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770576 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770605 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976874 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976917 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976941 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976952 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.075162 4593 scope.go:117] "RemoveContainer" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.075302 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:20:52.718623952 +0000 UTC Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079458 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079478 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079493 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079503 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.088209 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.105026 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.118882 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.130472 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.141930 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.153665 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.169836 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.182935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.182969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.182983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.183001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.183012 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.184983 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.203692 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.218180 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.234912 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.248856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.259797 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.273599 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284895 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284966 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284976 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.288767 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.311573 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.386701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.386982 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387067 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387236 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387461 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.389219 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.390028 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.408774 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.422459 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.440469 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.456311 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.466138 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.476732 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.485520 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488924 4593 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488956 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.497475 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.515827 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24
afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.528028 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.541826 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.554720 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.566419 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.578689 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.590965 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591049 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.596999 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.608062 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693190 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693221 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795493 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795590 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897953 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897962 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897985 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000576 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000644 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000678 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074149 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074211 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074159 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074154 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074303 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074380 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074453 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074583 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.076385 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:50:46.244284219 +0000 UTC Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103392 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205866 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205960 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308930 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308987 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.396370 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.397671 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.401748 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" exitCode=1 Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.401803 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.401840 4593 scope.go:117] "RemoveContainer" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.402823 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.403176 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.411742 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.411893 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.411979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.412071 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.412150 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.420134 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424828 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424836 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.435667 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.437333 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440775 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440880 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440895 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440904 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.449250 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.460315 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.468763 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470612 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470721 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470798 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.481898 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.484893 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488507 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488530 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.497399 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.499826 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503081 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503417 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503508 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.508325 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.513983 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.514140 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515748 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515790 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.519285 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.531347 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.545338 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.561741 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24
afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.574436 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.586574 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.598721 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.607939 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617311 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617920 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617984 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617996 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719883 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719900 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719923 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719940 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823899 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823991 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927372 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927511 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927609 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927753 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030272 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030285 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030316 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.076621 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:22:33.883238222 +0000 UTC Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132264 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132352 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.233986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234552 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337172 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337217 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337228 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337256 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.406746 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.410929 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 10:59:43 crc kubenswrapper[4593]: E0129 10:59:43.411352 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.422651 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.439364 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440313 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440351 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440360 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440384 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.454492 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.467232 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.484227 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.496332 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.508089 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.518658 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.529508 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.542976 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543039 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543054 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543066 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.544252 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.564973 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.577413 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
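Editor's note: in the ovnkube-node-vmt7l entry above, ovnkube-controller exits with "F0129 ... failed to run ovnkube" and is then held in "back-off 20s restarting failed container" at restartCount 2. As an aside, 20s is consistent with kubelet's default crash-loop backoff of a 10s base that doubles per restart up to a 5-minute cap; the sketch below reproduces that schedule under exactly that assumption and is not kubelet code.

```go
// backoff.go - reproduce a crash-loop backoff schedule under the assumed
// kubelet defaults: 10s initial delay, doubling per restart, capped at 5m.
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the wait before the next restart attempt given how
// many times the container has already restarted.
func crashLoopDelay(restartCount int, base, max time.Duration) time.Duration {
	d := base
	for i := 1; i < restartCount; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for restarts := 1; restarts <= 8; restarts++ {
		fmt.Printf("restartCount=%d -> back-off %s\n",
			restarts, crashLoopDelay(restarts, 10*time.Second, 5*time.Minute))
	}
}
```

With these assumed defaults, restartCount=2 yields the 20s back-off seen in the waiting reason above.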
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.590895 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.606814 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.616770 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.627100 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644624 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644683 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644712 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747625 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747687 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747697 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747731 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850754 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850854 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850942 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.851026 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953540 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056143 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056150 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056164 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056172 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074868 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074912 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074943 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074875 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075014 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075071 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075116 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075154 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.076919 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:02:25.713674191 +0000 UTC Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.158997 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159070 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159107 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261267 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261399 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364794 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466860 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466906 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466933 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569118 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671156 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671206 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671234 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774435 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774830 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774925 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774998 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878153 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878388 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878470 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878581 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878703 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.980882 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.980959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.980983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.981017 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.981041 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.077047 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:10:54.175524064 +0000 UTC Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083331 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083358 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.093576 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.115842 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.177134 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185810 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.186191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.196823 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.215076 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.233068 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.245877 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.258413 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.273543 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.286358 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288325 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288355 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288381 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.298187 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.306128 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.313477 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.324315 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.339542 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.350170 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.360112 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.370121 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.379469 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390063 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390252 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390351 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390504 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.392740 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.404210 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.415174 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.427995 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.437458 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.447961 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.456291 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.467664 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.476739 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.490155 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492705 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492755 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.503964 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.516181 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.532624 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.551405 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601052 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601166 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.950559 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:45 crc kubenswrapper[4593]: E0129 10:59:45.950757 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:45 crc kubenswrapper[4593]: E0129 10:59:45.950807 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. 
No retries permitted until 2026-01-29 11:00:01.950792344 +0000 UTC m=+67.823826535 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952524 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952536 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054400 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054471 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.073907 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.074007 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.074015 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074418 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.074091 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074526 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074256 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074612 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.077140 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:29:12.001304829 +0000 UTC Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156606 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156717 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156751 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156764 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259616 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259694 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259737 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362054 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362075 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362089 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464651 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464663 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464680 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464691 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566925 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566973 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566998 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670706 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670720 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670729 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.772915 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.772965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.773037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.773072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.773089 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876783 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876828 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876865 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979479 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979506 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.078243 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:20:37.186400964 +0000 UTC Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.080861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.080988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.081102 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.081203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.081261 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183288 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183302 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285232 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285246 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285300 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285312 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387510 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387524 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490706 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490740 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490772 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594192 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594307 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696548 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696565 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696576 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799889 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799899 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.867717 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.867912 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.867870447 +0000 UTC m=+85.740904648 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.868020 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.868064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.868098 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868192 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868234 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.868222966 +0000 UTC m=+85.741257167 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868451 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868486 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.868475853 +0000 UTC m=+85.741510054 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868609 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868656 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868670 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868720 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.868707219 +0000 UTC m=+85.741741430 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902873 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902934 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902945 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.968906 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969073 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969094 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969108 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969165 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.96915007 +0000 UTC m=+85.842184271 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005588 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005598 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005623 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073836 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074220 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073927 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074438 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073880 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074684 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073944 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074865 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.078573 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:29:48.979832326 +0000 UTC Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108408 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211682 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211719 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313416 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313427 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.415986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416020 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416052 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416063 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.519557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520173 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520270 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520416 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622558 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622655 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622668 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725256 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725265 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827773 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930841 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930879 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033480 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033490 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.079705 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:02:41.958062614 +0000 UTC Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135415 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135457 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135485 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237863 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237874 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340163 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340176 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340194 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340206 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442843 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.443003 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544790 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544835 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544860 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647772 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647809 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750012 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750082 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852795 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852827 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.959938 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960017 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960173 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960188 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062840 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.073824 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.073829 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.073961 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.074007 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.074281 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.074345 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.074529 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.074529 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.079811 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:13:36.407338689 +0000 UTC Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165102 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165182 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165198 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268008 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268070 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268124 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371337 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474040 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474069 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474077 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474113 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576201 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576277 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678256 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678294 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780846 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780885 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780924 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.882903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883419 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883499 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883569 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986517 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.080609 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:16:54.95095111 +0000 UTC Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088890 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088902 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088928 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.191989 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192062 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192074 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.249715 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.260589 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.265081 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.277097 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.288197 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294741 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294808 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294819 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.301345 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.314150 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.326099 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.339024 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.349747 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.363853 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.373329 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.384248 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.393937 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397101 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397126 4593 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397157 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.405332 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.426512 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24
afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.437284 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.449824 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc 
kubenswrapper[4593]: I0129 10:59:51.499222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499243 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601669 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601771 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.703991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704035 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704049 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704060 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.806995 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807079 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909442 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909506 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011448 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011482 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074225 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074262 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074403 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.074570 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.075085 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.075253 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.075361 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.080875 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:01:26.935938047 +0000 UTC Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114445 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114469 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114501 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217906 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.218007 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320851 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320866 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424694 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424794 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527817 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527833 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527872 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631322 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734248 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734287 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837302 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837372 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891260 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891278 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891290 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.905936 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:52Z is after 
2025-08-24T17:21:41Z" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910075 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910139 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.922418 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:52Z is after 
2025-08-24T17:21:41Z" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927151 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927165 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.941238 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:52Z is after 
2025-08-24T17:21:41Z" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944469 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944520 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944558 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944574 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.958554 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:52Z is after 
2025-08-24T17:21:41Z" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.982027 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:52Z is after 
2025-08-24T17:21:41Z" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.982258 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984514 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984587 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984618 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984717 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.081242 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:07:43.082000953 +0000 UTC Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086909 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086951 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086984 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190214 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190242 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292811 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292820 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292833 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292843 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396236 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396281 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396319 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498657 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498734 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600864 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600875 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703881 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703921 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806758 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806771 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806780 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.908936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909043 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909068 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909081 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012313 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012361 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.074139 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.074279 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.074592 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.074670 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.074709 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.074746 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.075296 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.075418 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.075464 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.075522 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.081776 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:35:19.863740224 +0000 UTC Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114235 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114286 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114297 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217054 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319395 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319452 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421596 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421657 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421674 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421691 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421704 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525101 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525109 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525121 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525129 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627279 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627321 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730144 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730152 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730166 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730175 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832625 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832663 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832685 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832702 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934972 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037135 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037143 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.081898 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:24:29.760480281 +0000 UTC Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.089593 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.103856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.119350 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.129275 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139621 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139920 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.140072 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.140187 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.151403 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.162624 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.172817 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.186665 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.197498 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.209221 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.219086 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.231584 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.240427 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242122 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242150 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc 
kubenswrapper[4593]: I0129 10:59:55.242179 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.250722 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.259318 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.267722 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343916 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343956 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343974 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446232 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446245 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548922 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.549001 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.651861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652393 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652677 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.755619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.755908 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.755986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.756058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.756122 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859480 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859511 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859526 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962021 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962439 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065365 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065385 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074514 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074547 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074619 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074661 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074725 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074720 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074838 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074937 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.082668 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:37:50.21572526 +0000 UTC Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168187 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168200 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168217 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168229 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270419 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270472 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270488 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372924 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372955 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372989 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475584 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475608 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475619 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.577945 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.577981 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.577993 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.578008 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.578019 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.680613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.680845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.680963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.681063 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.681141 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.783936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784208 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784395 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887527 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887594 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887668 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990547 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990574 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.083052 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:57:42.839292066 +0000 UTC Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092488 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194654 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194738 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194756 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.296717 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297292 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297376 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400359 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502820 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502888 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502905 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502916 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605860 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605929 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605938 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.707692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.707935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.708001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.708073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.708173 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810341 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810358 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810372 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913174 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913184 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.015999 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016043 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016065 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074657 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074691 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.074780 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.074936 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074965 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.075009 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074948 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.075064 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.084221 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:02:10.869909893 +0000 UTC Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.117977 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118023 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118034 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118050 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118061 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220535 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220576 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220613 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322642 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322653 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322676 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322687 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.424948 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.424990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.425001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.425017 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.425029 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527533 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630059 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630994 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732863 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834848 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834886 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937240 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937282 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937307 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039201 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.084825 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:50:42.217005253 +0000 UTC Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141378 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243929 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243940 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243964 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346334 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346363 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448343 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448899 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551384 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551394 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551421 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654084 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654158 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756569 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859440 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859483 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859527 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962319 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064284 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064334 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064360 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074009 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074146 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074404 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074480 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074651 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074733 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074817 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074935 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.085758 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:10:21.633505029 +0000 UTC Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166601 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270030 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270075 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270092 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270120 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372433 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.474618 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.474882 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.474966 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.475128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.475223 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.577926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.577978 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.577990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.578006 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.578015 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680534 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680551 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680573 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783177 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783285 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885372 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885416 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885439 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987771 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987821 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987869 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.085900 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:14:24.981392472 +0000 UTC Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089567 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089603 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089662 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192444 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192498 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192524 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.294944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.294998 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.295009 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.295027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.295038 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499957 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601691 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601709 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601720 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704713 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704751 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807438 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807450 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807477 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909756 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909779 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909796 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.991502 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:01 crc kubenswrapper[4593]: E0129 11:00:01.991660 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:01 crc kubenswrapper[4593]: E0129 11:00:01.991736 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:33.991713607 +0000 UTC m=+99.864747868 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011812 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074356 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074358 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.074805 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074453 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.075109 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074378 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.075282 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.074935 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.086663 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:33:14.312967784 +0000 UTC Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114655 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114699 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216854 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216866 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319438 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319449 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421890 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421899 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524139 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524195 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524221 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626696 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626782 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729010 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729045 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729071 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729082 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.831994 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832516 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935155 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935169 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935196 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037440 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037462 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.087878 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:39:15.322904422 +0000 UTC Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140020 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140053 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140081 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140093 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242160 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242178 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242190 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344705 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344766 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345594 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345680 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345689 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.356910 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360310 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.371493 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377389 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377426 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377465 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.388541 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391233 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391450 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391503 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.403730 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.406970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407082 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407312 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.420027 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.420487 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447892 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447929 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.550557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.550832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.550932 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.551027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.551113 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654483 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654494 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654522 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756585 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859199 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859268 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961550 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961561 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961591 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064009 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064064 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064130 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074486 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074516 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074601 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074721 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074768 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074863 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074947 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074987 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.088174 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 02:39:43.465693402 +0000 UTC Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166770 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269488 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371819 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371945 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.372035 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474583 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474706 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474891 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577717 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680389 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680463 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782916 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782957 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782982 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782995 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885385 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987773 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987784 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.074700 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.088467 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:32:10.878571974 +0000 UTC Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.088459 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089672 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089742 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089753 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.102767 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.113010 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.123256 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.138061 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.150544 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.169249 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.181447 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191951 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191973 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191981 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.194226 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.204966 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.215481 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.226778 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.239097 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.250177 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.268726 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.292838 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296626 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296668 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296703 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.314248 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399188 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399263 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.479492 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.481817 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.482482 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483252 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/0.log" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483302 4593 generic.go:334] "Generic (PLEG): container finished" podID="c76afd0b-36c6-4faa-9278-c08c60c483e9" containerID="c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08" exitCode=1 Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483340 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerDied","Data":"c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483757 4593 scope.go:117] "RemoveContainer" containerID="c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.500679 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501439 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501447 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501470 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.521655 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.548798 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.568192 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.587268 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc 
kubenswrapper[4593]: I0129 11:00:05.603515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603526 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.608145 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.621216 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.632505 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.643837 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.664619 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb48
4fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.688118 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.702455 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706218 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706254 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706264 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706291 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.718593 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.730454 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.744715 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.756828 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.767958 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.806487 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808800 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808812 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808850 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.835619 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.851649 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.869589 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.883561 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.900492 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911463 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911489 
4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911502 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.922167 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.939332 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.962548 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.987327 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.004847 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013533 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013618 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.024002 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.041347 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.053450 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.070936 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074092 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073929 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074312 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073927 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074506 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073979 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074718 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.082841 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.088838 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:52:38.529229672 +0000 UTC Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.091880 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115421 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115451 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115474 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115482 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219450 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321585 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321597 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424069 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424079 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424122 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424133 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.488198 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.488846 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.491193 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" exitCode=1 Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.491232 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.491277 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.492071 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.492242 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.494189 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/0.log" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.494287 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.504654 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.518094 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526909 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.537602 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.549240 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.561947 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.577294 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.587838 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.597237 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.608570 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.622698 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628794 
4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628821 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628831 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.636076 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.646521 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.659318 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.669109 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.681724 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.692403 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.705102 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.718897 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731159 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731194 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731205 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731233 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.733825 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.750540 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.760690 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.774440 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.787129 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.799457 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.810795 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.820985 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834487 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834612 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834684 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.845285 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.871284 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.905656 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.919591 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.933554 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937060 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937070 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937096 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.944446 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.953284 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.962347 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039592 4593 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039601 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039622 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.089585 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:07:27.785932388 +0000 UTC Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.141763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.141984 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.142110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.142177 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.142233 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.244446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.244757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.244960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.245120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.245290 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347048 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347369 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347491 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.449856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450158 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450332 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.499232 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.502389 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:07 crc kubenswrapper[4593]: E0129 11:00:07.502624 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.514042 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.525720 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.535002 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.546101 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552478 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552500 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.556835 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.572501 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.584716 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.594812 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.604851 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.618659 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.640994 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.652702 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656747 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656762 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.668008 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.682952 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.695951 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.709328 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.720720 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758286 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758319 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860947 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860956 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860979 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963373 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963410 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963431 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066484 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074286 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074318 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074286 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074373 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074393 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074465 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074511 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074555 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.089873 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:41:37.847840364 +0000 UTC Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168786 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168794 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168815 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270697 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270714 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270749 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.372986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373025 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373067 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475130 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475164 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475178 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577717 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577745 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680329 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680340 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.782975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783059 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783071 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885140 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885156 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885168 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987356 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987364 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987408 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987417 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089200 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089248 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089286 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.090292 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:17:34.892876041 +0000 UTC Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192024 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192101 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294455 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294481 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294490 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
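
The certificate_manager.go:356 entries in this stretch keep reporting the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline each time, and every drawn deadline already lies in the past, so rotation stays due and the deadline is recomputed on each pass. Below is a minimal sketch of that recomputation, assuming the commonly used jittered window of roughly 70 to 90 percent of the certificate's validity; the NotBefore value is an assumption, since the log only shows the expiration, and this is not client-go's actual implementation.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // nextRotationDeadline picks a random point in roughly the [70%, 90%) slice of
    // the certificate's validity period, so each recomputation yields a different
    // date. Sketch only; the real logic lives in client-go's certificate manager.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
    	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed issue time (not shown in the log)
    	for i := 0; i < 3; i++ {
    		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
    	}
    }
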
Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396816 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499195 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499225 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602224 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602252 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602260 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602284 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704765 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704776 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704805 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807307 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807364 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909338 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909363 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013166 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013223 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013278 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074739 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075102 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074820 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075330 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074836 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075536 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074778 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075754 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.090863 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:51:00.299880052 +0000 UTC Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.116707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117191 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117354 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117475 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.219624 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.219893 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.219954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.220011 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.220078 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322737 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322754 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425181 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425206 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.527332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.527898 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.528238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.528437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.528594 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631046 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631414 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631569 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734342 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734401 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734419 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734464 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837130 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.939505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.939739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.939943 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.940115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.940269 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042815 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.091417 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:08:43.859774815 +0000 UTC Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.144688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.144931 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.145028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.145117 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.145236 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247235 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349981 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349999 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.350011 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451823 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451902 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451913 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553553 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553622 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553678 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.655986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656068 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758323 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758368 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758394 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.860609 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.860908 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.860970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.861035 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.861095 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963181 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963221 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963231 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963247 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963258 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065273 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065352 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074474 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074514 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074487 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074567 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074484 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074707 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074750 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074860 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.092938 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:52:09.168680573 +0000 UTC Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.166911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.166967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.166983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.167005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.167019 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269813 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269864 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269908 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372397 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372442 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372472 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475193 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475226 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.577619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.577896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.577965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.578028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.578110 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680643 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680690 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783368 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783415 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783441 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885700 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885732 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885773 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885782 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989724 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989747 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989778 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989799 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092809 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092855 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.093015 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:12:08.067495387 +0000 UTC Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195889 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195942 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195964 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298165 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298214 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298228 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298248 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298264 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400354 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503767 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503843 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521388 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521471 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.536413 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.555997 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559295 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559347 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.571504 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575391 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575418 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.588047 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591731 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.603489 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.603601 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605778 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605817 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605847 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605858 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708835 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708848 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708876 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811585 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811598 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811624 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914171 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016478 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016516 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074964 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074925 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075029 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075101 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075181 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075268 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.093143 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:39:59.797656939 +0000 UTC Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119237 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221731 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221768 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221810 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323611 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323659 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425870 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425895 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528599 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528685 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630403 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630433 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630461 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732096 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732122 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732144 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.834947 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835024 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835041 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835053 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937095 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937154 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039143 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039151 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039165 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039173 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.088131 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.093898 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:44:28.240705258 +0000 UTC Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.099034 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.108069 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.120791 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.136529 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141331 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.157858 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.168834 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.192400 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.207363 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.218869 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260270 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260350 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.261543 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.275023 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.288214 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.298971 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.309685 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.323137 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.335541 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362615 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362654 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362668 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362677 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465442 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465510 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.567950 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568000 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568045 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670523 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670579 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670608 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
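The two "Failed to update status for pod" entries earlier in this minute (for kube-apiserver-crc and network-operator-58b4c7f79c-55gtf) fail for the same reason: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) lies before the node clock (2026-01-29T11:00:15Z), so the TLS handshake is rejected with "certificate has expired or is not yet valid". A minimal Go sketch of that NotBefore/NotAfter validity check, using only the standard library; the certificate path below is a placeholder, not something taken from this journal:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; point this at the webhook's serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The case reported in the journal: "certificate has expired".
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}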
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.772979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773011 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773019 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773031 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773041 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875541 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875584 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875625 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978364 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978395 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074128 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074154 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074189 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074252 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074189 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074323 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074436 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074523 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081164 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081192 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081208 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081219 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.094007 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:04:18.842722269 +0000 UTC Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.183967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184006 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184014 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184031 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184040 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
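Every NetworkReady=false condition and every "Error syncing pod, skipping" entry in this stretch (networking-console-plugin-85b44fc459-gdk6g, network-check-source-55646444c4-trplf, network-check-target-xd92c, network-metrics-daemon-7jm9m) comes down to one missing prerequisite: the runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/, so no pod sandbox can be created. A small sketch of that directory check; the path comes from the log, while the *.conf/*.conflist/*.json naming is an assumption about how CNI configs are usually stored:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	var configs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed CNI config extensions
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// Matches the condition reported above: no CNI configuration file present.
		fmt.Printf("no CNI configuration file in %s/\n", dir)
		return
	}
	fmt.Printf("found %d CNI configuration file(s): %v\n", len(configs), configs)
}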
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287126 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287139 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390173 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390201 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493277 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493300 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493314 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596580 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596599 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596613 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.698970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699004 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699014 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699029 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699040 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801466 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801528 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801537 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904400 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007104 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.094484 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:30:33.623306195 +0000 UTC Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109354 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109385 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212601 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212682 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212695 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212727 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
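The two certificate_manager lines so far keep the kubelet-serving expiration fixed at 2026-02-24 05:53:03 UTC but report different rotation deadlines (2026-01-06 00:04:18 and 2025-12-11 20:30:33), both already in the past relative to the node clock. That pattern is consistent with a rotation deadline drawn at a random point late in the certificate's validity window and recomputed on each pass. A sketch of that kind of jittered deadline under that assumption; the 70-90% band and the assumed issuance time are illustrative, not values from the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the last part of the certificate's
// validity window. The 70%-90% band is an illustrative assumption; the journal
// only shows that the deadline is jittered and falls well before NotAfter.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // somewhere in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	// Expiration taken from the certificate_manager lines; the issuance time is
	// not in the log, so a 30-day lifetime is assumed here.
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.Add(-30 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}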
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315112 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315160 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315182 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417754 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417827 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520714 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520726 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622895 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622929 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622940 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622986 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725737 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725794 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828447 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828511 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828522 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930696 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930721 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033536 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074355 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074539 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074570 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.074711 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074766 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.074837 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.074908 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.075009 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.075454 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.075596 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.094939 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:28:35.189379958 +0000 UTC Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.135964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136010 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136022 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136039 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136052 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238338 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238346 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340740 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340764 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340775 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
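The RemoveContainer/CrashLoopBackOff entries just above show ovnkube-controller in pod ovnkube-node-vmt7l failing repeatedly and being held back for 40s before its next restart, a figure that fits a doubling restart delay (10s, 20s, 40s, ...). A minimal sketch of that doubling-with-cap pattern; the 10s base and 5m cap are assumptions for illustration, only the 40s value appears in the log:

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a cap. The 10s base and 5m cap
// used below are assumptions; the journal above only shows "back-off 40s".
func nextBackoff(prev, base, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return base
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	base := 10 * time.Second
	maxDelay := 5 * time.Minute
	d := time.Duration(0)
	for i := 0; i < 7; i++ {
		d = nextBackoff(d, base, maxDelay)
		fmt.Printf("restart %d: wait %s\n", i+1, d)
	}
	// The third step is the 40s back-off seen for ovnkube-controller above.
}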
Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442892 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442900 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442923 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547226 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547239 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547248 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649765 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649843 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757345 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757357 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859336 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859347 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.961509 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.961856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.961964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.962057 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.962151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064231 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064493 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064653 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064922 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.095127 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:22:44.950046399 +0000 UTC Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167524 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167610 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167769 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272418 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272482 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272517 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272571 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375351 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375428 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478516 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
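Each setters.go entry in this stretch logs the full Ready condition as inline JSON: status False, reason KubeletNotReady, and the CNI message as the explanation. That payload is ordinary JSON and can be pulled apart with encoding/json; the struct below is a minimal stand-in for the fields visible in these lines, not the upstream NodeCondition type:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// condition mirrors just the fields visible in the setters.go entries above.
type condition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from one of the "Node became not ready" entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c condition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}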
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580540 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580550 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580576 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696306 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696335 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696348 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799007 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799053 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799098 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870107 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870242 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.87022446 +0000 UTC m=+149.743258651 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870339 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870389 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870432 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870471 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870505 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.870498717 +0000 UTC m=+149.743532908 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870539 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870676 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870705 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.870678093 +0000 UTC m=+149.743712334 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870715 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870743 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870809 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.870787146 +0000 UTC m=+149.743821377 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902358 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902407 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.971535 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972003 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972114 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972206 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972340 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.972323928 +0000 UTC m=+149.845358119 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.005939 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006015 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006034 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006077 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.074836 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.074854 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.075065 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.075176 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.075563 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.075704 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.076114 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.075957 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.095567 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:45:24.807700675 +0000 UTC Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.108437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.108487 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.108496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.108518 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.108528 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210743 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210836 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313178 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313189 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.416564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.416853 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.416926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.417229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.417326 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.520927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521554 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624527 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624602 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624626 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727193 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727280 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829315 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829339 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931172 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033074 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033140 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.096352 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:13:48.295170131 +0000 UTC Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136152 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238349 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238372 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340676 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340719 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340731 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340758 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443498 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443514 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443523 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545398 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545408 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545434 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647627 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647653 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647669 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647680 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750199 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750240 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750265 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750275 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852904 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852920 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852929 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954772 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954846 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954858 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057384 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057415 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057463 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.073844 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.073882 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.073851 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.073948 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.074071 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.074174 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.074687 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.097317 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:48:48.042515436 +0000 UTC Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161053 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161099 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161134 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263583 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263594 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.366998 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367098 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367131 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468938 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468977 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.570991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571039 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571047 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673704 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673758 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673814 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775481 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775546 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878499 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878746 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981537 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981548 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981577 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082900 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082937 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082947 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082979 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.087663 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.097825 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:28:19.67738572 +0000 UTC Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185659 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185690 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185700 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288139 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288203 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390756 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390797 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390807 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390823 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390832 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493869 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493888 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493916 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493934 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597022 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597095 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597135 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699518 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699585 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801379 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801463 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903904 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903946 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.905894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.905974 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.905985 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.906028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.906055 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.918499 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921870 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921936 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.934806 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938292 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938302 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938328 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.948543 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953336 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953347 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.965315 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968644 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968653 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.981144 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.981363 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006439 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006506 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074115 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074159 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074135 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074113 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074489 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074675 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074783 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.098481 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:44:40.607966608 +0000 UTC Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109525 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109543 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109578 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212578 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212611 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212621 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212674 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315554 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315583 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417779 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417841 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417882 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521718 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521809 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624723 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624740 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624752 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727260 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727284 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727293 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829720 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829756 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936588 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936888 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040313 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040349 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040383 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.088800 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.088129 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27d4efcc-5516-48f8-b823-410c48349569\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96af555718c85d958e5e6ff04df0c2a39cf2a2d90ed75aa8ce3de1aeccd58ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58235ff8efa3285de647904b309802e9e59de3498d59d86437eae4b9afa2ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58235ff8efa3285de647904b309802e9e59de3498d59d86437eae4b9afa2ad1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.098976 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:47:22.273448353 +0000 UTC Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.101092 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.110928 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.120293 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.131916 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143323 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143590 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.146514 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.164354 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.178196 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.191813 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.204171 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.216489 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.228174 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.240057 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248264 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248335 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248362 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.251755 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.265919 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.276439 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.287387 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.297136 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350178 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350188 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350213 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452622 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452784 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555157 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555193 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555269 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658040 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658144 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658157 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761244 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761267 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863872 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863881 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967163 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967470 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967488 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070393 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070420 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070435 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074689 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074722 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074689 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074863 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.074910 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.074969 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.075046 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.075098 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.099725 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:21:40.955780956 +0000 UTC Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173086 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173152 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173166 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275948 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275957 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275972 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275981 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378801 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378938 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.379027 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481797 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481859 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584288 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584328 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.687098 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.687148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.687160 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.687177 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.687191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.789756 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.789792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.789801 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.789815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.789828 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.892274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.892321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.892333 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.892352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.892363 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.994861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.994896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.994907 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.994922 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.994936 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097700 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097711 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.100747 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:20:42.687564196 +0000 UTC Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.200893 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.201297 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.201433 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.201536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.201614 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.304462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.304500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.304510 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.304524 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.304534 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.407303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.407347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.407379 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.407401 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.407414 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.509723 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.509764 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.509780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.509800 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.509812 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.612578 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.612623 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.612662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.612679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.612690 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.715195 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.715508 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.715575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.715665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.715760 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.818308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.818348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.818359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.818374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.818387 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.920759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.920835 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.920844 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.920858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.920866 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023674 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023685 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023712 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074426 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074374 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074464 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074486 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075034 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075113 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075205 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075153 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.102007 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:06:31.22182521 +0000 UTC Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.125932 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.125972 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.125982 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.126000 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.126014 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228193 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.330312 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.330346 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.330355 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.330370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.330381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.433311 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.433550 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.433652 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.433725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.433780 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.536860 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.537124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.537213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.537339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.537427 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.640185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.640216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.640225 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.640239 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.640249 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.743236 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.743275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.743289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.743305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.743315 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.845234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.845295 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.845305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.845320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.845331 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.947483 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.947538 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.947546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.947561 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.947572 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050327 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.075712 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:29 crc kubenswrapper[4593]: E0129 11:00:29.075921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.102609 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:41:56.670921189 +0000 UTC Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152364 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152417 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152442 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.254570 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.254849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.254918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.255002 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.255108 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.358110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.358512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.358577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.358704 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.358784 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.460927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.461149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.461266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.461350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.461430 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.564186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.564234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.564243 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.564261 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.564269 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.665971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.666023 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.666033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.666049 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.666060 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.768383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.768414 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.768424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.768439 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.768451 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.871648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.871675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.871683 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.871696 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.871704 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.974337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.974381 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.974397 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.974413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.974422 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074572 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074617 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.074721 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074597 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.074807 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.074931 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.075005 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075974 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075984 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075993 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.103186 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:09:11.177025138 +0000 UTC Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.177936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.177991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.178005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.178027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.178042 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.280007 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.280062 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.280074 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.280094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.280107 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.382559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.382607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.382620 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.382677 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.382699 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.485200 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.485249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.485262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.485280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.485293 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.588558 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.588599 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.588611 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.588626 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.588654 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.691172 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.691211 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.691221 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.691238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.691248 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.793499 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.793569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.793585 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.793604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.793615 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.896088 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.896134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.896198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.896224 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.896241 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.998550 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.998615 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.998624 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.998650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.998662 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100807 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100821 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100832 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.105692 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:42:57.076518253 +0000 UTC Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.204272 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.204487 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.204549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.204608 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.204692 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.306752 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.306812 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.306832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.306860 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.306884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.409997 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.410032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.410040 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.410055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.410065 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.511787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.511894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.511912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.511934 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.511950 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.614051 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.614112 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.614126 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.614141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.614153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.716809 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.716894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.716910 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.716934 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.716951 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.819477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.819545 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.819558 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.819578 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.819593 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.922506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.922810 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.922903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.923037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.923162 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025078 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025127 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074682 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075002 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074702 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075209 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074682 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075370 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074770 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075599 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.108055 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:56:36.496167571 +0000 UTC Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126870 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126893 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.228913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229205 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229475 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331789 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331844 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433326 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433336 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433361 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535534 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535571 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535604 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.637964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638015 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638056 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740480 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842421 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842447 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944343 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944356 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944390 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046817 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046862 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.108646 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:50:45.693748619 +0000 UTC Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149577 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.251962 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252011 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252056 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355060 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355099 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355123 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457445 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457514 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457526 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559872 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559883 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661517 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661552 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661561 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661582 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763869 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867118 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867406 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867589 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.970855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971221 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971326 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971426 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.013930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.014110 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.014221 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:38.01419598 +0000 UTC m=+163.887230251 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.073989 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074057 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074406 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074086 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074445 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:34Z","lastTransitionTime":"2026-01-29T11:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074087 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074559 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074058 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074699 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074351 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074887 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082360 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082532 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082598 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082676 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:34Z","lastTransitionTime":"2026-01-29T11:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.108867 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:19:41.479918552 +0000 UTC Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.108953 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.119329 4593 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.123491 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw"] Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.123923 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.127820 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.128140 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.128807 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.129026 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.154526 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podStartSLOduration=78.154503382 podStartE2EDuration="1m18.154503382s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.143123098 +0000 UTC m=+100.016157289" watchObservedRunningTime="2026-01-29 11:00:34.154503382 +0000 UTC m=+100.027537573" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.215951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216014 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216038 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216060 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216121 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.217459 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xpt4q" podStartSLOduration=78.217447329 podStartE2EDuration="1m18.217447329s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.216658218 +0000 UTC m=+100.089692419" watchObservedRunningTime="2026-01-29 11:00:34.217447329 +0000 UTC m=+100.090481530" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.217666 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mkxdt" podStartSLOduration=78.217660655 podStartE2EDuration="1m18.217660655s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.202531898 +0000 UTC m=+100.075566089" watchObservedRunningTime="2026-01-29 11:00:34.217660655 +0000 UTC m=+100.090694846" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.226365 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-42qv9" podStartSLOduration=78.226347325 podStartE2EDuration="1m18.226347325s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.225771889 +0000 UTC m=+100.098806100" watchObservedRunningTime="2026-01-29 11:00:34.226347325 +0000 UTC m=+100.099381516" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.253493 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.253478403 podStartE2EDuration="43.253478403s" podCreationTimestamp="2026-01-29 10:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.252846205 +0000 UTC m=+100.125880406" watchObservedRunningTime="2026-01-29 11:00:34.253478403 +0000 UTC m=+100.126512594" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.274726 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.274707759 podStartE2EDuration="9.274707759s" podCreationTimestamp="2026-01-29 11:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.274481543 +0000 UTC m=+100.147515764" watchObservedRunningTime="2026-01-29 11:00:34.274707759 +0000 UTC m=+100.147741950" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.291913 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.291893433 podStartE2EDuration="1m18.291893433s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.291450631 +0000 UTC m=+100.164484822" watchObservedRunningTime="2026-01-29 
11:00:34.291893433 +0000 UTC m=+100.164927634" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316872 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316857 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.316838092 podStartE2EDuration="11.316838092s" podCreationTimestamp="2026-01-29 11:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.316809121 +0000 UTC m=+100.189843322" watchObservedRunningTime="2026-01-29 11:00:34.316838092 +0000 UTC m=+100.189872283" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316984 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317001 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317061 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317099 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: 
\"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317942 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.326282 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.340195 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.361532 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" podStartSLOduration=78.361484314 podStartE2EDuration="1m18.361484314s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.350958974 +0000 UTC m=+100.223993165" watchObservedRunningTime="2026-01-29 11:00:34.361484314 +0000 UTC m=+100.234518505" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.373419 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.373400473 podStartE2EDuration="1m19.373400473s" podCreationTimestamp="2026-01-29 10:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.373260618 +0000 UTC m=+100.246294809" watchObservedRunningTime="2026-01-29 11:00:34.373400473 +0000 UTC m=+100.246434664" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.423428 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zk9np" podStartSLOduration=78.423411292 podStartE2EDuration="1m18.423411292s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.396754537 +0000 UTC m=+100.269788748" watchObservedRunningTime="2026-01-29 11:00:34.423411292 +0000 UTC m=+100.296445483" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.447052 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: W0129 11:00:34.460149 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4b66123_cd65_43f4_8c09_ca4b8537e2e8.slice/crio-6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86 WatchSource:0}: Error finding container 6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86: Status 404 returned error can't find the container with id 6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86 Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.581424 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" event={"ID":"c4b66123-cd65-43f4-8c09-ca4b8537e2e8","Type":"ContainerStarted","Data":"6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86"} Jan 29 11:00:35 crc kubenswrapper[4593]: I0129 11:00:35.585307 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" event={"ID":"c4b66123-cd65-43f4-8c09-ca4b8537e2e8","Type":"ContainerStarted","Data":"3288bb11a1c18beee2c5f4b89aca8e57baa50fa7494b4f22575ad2c6ac8b9e5b"} Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.074841 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.074904 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075182 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075305 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.075371 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.075390 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075464 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075522 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074683 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074713 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074714 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074683 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.074821 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.074952 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.075039 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.074910 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074140 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074205 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074742 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074860 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.074960 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.075108 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.075197 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.075272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074345 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074414 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074344 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074346 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074485 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074579 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074693 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074757 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.074446 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.074536 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.074582 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.074865 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.074890 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.075350 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.075460 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.075521 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.075783 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.075812 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074750 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074764 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074900 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.075404 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.075068 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.075001 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074765 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.076229 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.073909 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.074666 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.074003 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.074885 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.073920 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.075116 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.074042 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.075326 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074585 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075244 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074751 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075506 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074769 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075591 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074657 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075715 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.631824 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632381 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/0.log" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632431 4593 generic.go:334] "Generic (PLEG): container finished" podID="c76afd0b-36c6-4faa-9278-c08c60c483e9" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" exitCode=1 Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerDied","Data":"ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117"} Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632492 4593 scope.go:117] "RemoveContainer" containerID="c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.633366 4593 scope.go:117] "RemoveContainer" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" Jan 29 11:00:51 crc kubenswrapper[4593]: E0129 11:00:51.634076 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xpt4q_openshift-multus(c76afd0b-36c6-4faa-9278-c08c60c483e9)\"" pod="openshift-multus/multus-xpt4q" podUID="c76afd0b-36c6-4faa-9278-c08c60c483e9" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.650622 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" podStartSLOduration=95.650604661 podStartE2EDuration="1m35.650604661s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:35.604850283 +0000 UTC m=+101.477884524" watchObservedRunningTime="2026-01-29 11:00:51.650604661 +0000 UTC m=+117.523638862" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074251 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074328 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074389 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074451 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074494 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074328 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074533 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074618 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.636311 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.074477 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.074609 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.074725 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.074785 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.075055 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.075108 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.075238 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.075408 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:55 crc kubenswrapper[4593]: E0129 11:00:55.105972 4593 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 11:00:55 crc kubenswrapper[4593]: E0129 11:00:55.186173 4593 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.074815 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.074944 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.074990 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.075007 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.075020 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.075079 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.075467 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.075553 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.075763 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.649305 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.651475 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.652567 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.922166 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podStartSLOduration=100.922142296 podStartE2EDuration="1m40.922142296s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:56.682134114 +0000 UTC m=+122.555168305" watchObservedRunningTime="2026-01-29 11:00:56.922142296 +0000 UTC m=+122.795176507" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.923895 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7jm9m"] Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.924062 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.924196 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:58 crc kubenswrapper[4593]: I0129 11:00:58.074711 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:58 crc kubenswrapper[4593]: I0129 11:00:58.074753 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:58 crc kubenswrapper[4593]: I0129 11:00:58.074753 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:58 crc kubenswrapper[4593]: E0129 11:00:58.074870 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:58 crc kubenswrapper[4593]: E0129 11:00:58.074950 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:58 crc kubenswrapper[4593]: E0129 11:00:58.075011 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:59 crc kubenswrapper[4593]: I0129 11:00:59.074728 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:59 crc kubenswrapper[4593]: E0129 11:00:59.074894 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:00 crc kubenswrapper[4593]: I0129 11:01:00.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:00 crc kubenswrapper[4593]: I0129 11:01:00.074665 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:00 crc kubenswrapper[4593]: I0129 11:01:00.074688 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.074756 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.074803 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.074870 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.187911 4593 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:01:01 crc kubenswrapper[4593]: I0129 11:01:01.074726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:01 crc kubenswrapper[4593]: E0129 11:01:01.074877 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:02 crc kubenswrapper[4593]: I0129 11:01:02.074057 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:02 crc kubenswrapper[4593]: I0129 11:01:02.074129 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:02 crc kubenswrapper[4593]: E0129 11:01:02.074219 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:02 crc kubenswrapper[4593]: E0129 11:01:02.074255 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:02 crc kubenswrapper[4593]: I0129 11:01:02.074739 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:02 crc kubenswrapper[4593]: E0129 11:01:02.074816 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:03 crc kubenswrapper[4593]: I0129 11:01:03.075041 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:03 crc kubenswrapper[4593]: E0129 11:01:03.075240 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.074568 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.074583 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:04 crc kubenswrapper[4593]: E0129 11:01:04.074740 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:04 crc kubenswrapper[4593]: E0129 11:01:04.074841 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.075167 4593 scope.go:117] "RemoveContainer" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.074800 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:04 crc kubenswrapper[4593]: E0129 11:01:04.075763 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.673418 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.673774 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2"} Jan 29 11:01:05 crc kubenswrapper[4593]: I0129 11:01:05.074417 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:05 crc kubenswrapper[4593]: E0129 11:01:05.075827 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:05 crc kubenswrapper[4593]: E0129 11:01:05.188317 4593 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:01:06 crc kubenswrapper[4593]: I0129 11:01:06.074414 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:06 crc kubenswrapper[4593]: I0129 11:01:06.074420 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:06 crc kubenswrapper[4593]: E0129 11:01:06.074581 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:06 crc kubenswrapper[4593]: E0129 11:01:06.074682 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:06 crc kubenswrapper[4593]: I0129 11:01:06.074438 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:06 crc kubenswrapper[4593]: E0129 11:01:06.074759 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:07 crc kubenswrapper[4593]: I0129 11:01:07.074743 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:07 crc kubenswrapper[4593]: E0129 11:01:07.074881 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:08 crc kubenswrapper[4593]: I0129 11:01:08.074622 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:08 crc kubenswrapper[4593]: I0129 11:01:08.074710 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:08 crc kubenswrapper[4593]: E0129 11:01:08.074775 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:08 crc kubenswrapper[4593]: E0129 11:01:08.074930 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:08 crc kubenswrapper[4593]: I0129 11:01:08.075277 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:08 crc kubenswrapper[4593]: E0129 11:01:08.075415 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:09 crc kubenswrapper[4593]: I0129 11:01:09.074622 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:09 crc kubenswrapper[4593]: E0129 11:01:09.075071 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:10 crc kubenswrapper[4593]: I0129 11:01:10.074207 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:10 crc kubenswrapper[4593]: I0129 11:01:10.074263 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:10 crc kubenswrapper[4593]: E0129 11:01:10.074323 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:10 crc kubenswrapper[4593]: E0129 11:01:10.074453 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:10 crc kubenswrapper[4593]: I0129 11:01:10.074207 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:10 crc kubenswrapper[4593]: E0129 11:01:10.074535 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:11 crc kubenswrapper[4593]: I0129 11:01:11.074234 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:11 crc kubenswrapper[4593]: I0129 11:01:11.077048 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 11:01:11 crc kubenswrapper[4593]: I0129 11:01:11.077573 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.074419 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.074527 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077344 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077793 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077952 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077990 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.695913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.734306 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.734821 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.735239 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m9zzn"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.735836 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.738739 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739252 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739565 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739873 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739971 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.740411 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741053 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741097 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741377 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741504 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.744140 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.744224 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.745424 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.745451 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.746071 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.750007 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.750351 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751512 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751696 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751720 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751853 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751938 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.753048 4593 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754298 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754555 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754674 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754983 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754994 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.758075 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.758153 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.759351 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.760776 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.760818 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.762909 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.764824 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.764829 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.768127 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gl968"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.769009 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.771355 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.783713 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.784776 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.785820 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.786278 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.786793 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.787420 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.787704 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.792989 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.793227 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gz9wd"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.793591 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t7wn4"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.793850 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fm7cc"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.794110 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.794719 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.794944 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.795505 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796009 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vtdww"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796455 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796757 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796894 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.797102 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.797131 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.798086 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.801960 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.802955 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.804754 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.805299 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.808033 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m9zzn"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814578 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814682 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814719 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814753 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814787 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21b7f343-d887-4bdf-85c0-9639179e9c56-machine-approver-tls\") pod \"machine-approver-56656f9798-gl968\" (UID: 
\"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814891 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814962 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814998 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815002 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815034 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-node-pullsecrets\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815088 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-encryption-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815117 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43e8598d-f86e-425e-8418-bcfb93e3bd63-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7vb\" (UniqueName: \"kubernetes.io/projected/21b7f343-d887-4bdf-85c0-9639179e9c56-kube-api-access-mq7vb\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815183 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q57bg\" (UniqueName: \"kubernetes.io/projected/43e8598d-f86e-425e-8418-bcfb93e3bd63-kube-api-access-q57bg\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815247 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815279 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5xjz\" (UniqueName: \"kubernetes.io/projected/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-kube-api-access-r5xjz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815305 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822796 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d100ddd-343c-48f6-ad0a-e08d3e23a904-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822876 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d100ddd-343c-48f6-ad0a-e08d3e23a904-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822905 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822949 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822965 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit-dir\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822985 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e8598d-f86e-425e-8418-bcfb93e3bd63-serving-cert\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823007 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-image-import-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823026 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2clt8\" (UniqueName: \"kubernetes.io/projected/3d100ddd-343c-48f6-ad0a-e08d3e23a904-kube-api-access-2clt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823052 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-auth-proxy-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823072 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-client\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823120 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823173 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823201 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823220 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-serving-cert\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823262 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78txz\" (UniqueName: \"kubernetes.io/projected/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-kube-api-access-78txz\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823285 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.824533 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.824621 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.826149 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.826556 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.831151 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.831506 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.831737 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.835729 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836336 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836611 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836733 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836951 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837435 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837530 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837571 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837840 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837865 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837939 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838077 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838109 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838123 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838233 4593 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838266 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838292 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838339 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838357 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838530 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838583 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838623 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838718 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838739 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838759 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838801 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838833 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838721 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838834 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838882 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838805 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838721 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838964 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838981 4593 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839037 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839047 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839067 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839125 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839134 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839188 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839202 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839247 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839271 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839286 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839274 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839518 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839870 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.842488 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843240 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hr6"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843354 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843588 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l64wd"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843858 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.844316 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.855702 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.856254 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.856581 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.857190 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.857716 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.858169 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.885772 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.886297 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xx52v"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.887456 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.887729 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.888900 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.889528 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891095 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891720 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891996 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.894104 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.920023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891722 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924574 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-encryption-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924647 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43e8598d-f86e-425e-8418-bcfb93e3bd63-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924680 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57bg\" (UniqueName: \"kubernetes.io/projected/43e8598d-f86e-425e-8418-bcfb93e3bd63-kube-api-access-q57bg\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924714 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7vb\" (UniqueName: \"kubernetes.io/projected/21b7f343-d887-4bdf-85c0-9639179e9c56-kube-api-access-mq7vb\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924750 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5xjz\" (UniqueName: 
\"kubernetes.io/projected/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-kube-api-access-r5xjz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924785 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924815 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924848 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924882 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d100ddd-343c-48f6-ad0a-e08d3e23a904-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924916 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d100ddd-343c-48f6-ad0a-e08d3e23a904-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924949 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924973 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit-dir\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925002 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") 
" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925029 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e8598d-f86e-425e-8418-bcfb93e3bd63-serving-cert\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925057 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-image-import-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925081 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-client\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925111 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2clt8\" (UniqueName: \"kubernetes.io/projected/3d100ddd-343c-48f6-ad0a-e08d3e23a904-kube-api-access-2clt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925143 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-auth-proxy-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925174 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925225 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925329 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925358 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-serving-cert\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925397 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78txz\" (UniqueName: \"kubernetes.io/projected/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-kube-api-access-78txz\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925427 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925459 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925495 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925582 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925615 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925943 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.931840 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21b7f343-d887-4bdf-85c0-9639179e9c56-machine-approver-tls\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.931915 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.931951 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.932127 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-node-pullsecrets\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.932515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-node-pullsecrets\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.958580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-auth-proxy-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.959070 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit-dir\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.959615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.959937 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.960274 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.960585 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d100ddd-343c-48f6-ad0a-e08d3e23a904-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.960965 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.961281 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.961772 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.962366 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-encryption-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.962555 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.963236 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.965890 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.966923 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.967549 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.971261 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.982740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-image-import-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983217 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e8598d-f86e-425e-8418-bcfb93e3bd63-serving-cert\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983429 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983529 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983706 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.985398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.985700 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.985978 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.986333 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.986349 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 
11:01:14.986359 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.990111 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.993283 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.993315 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.993731 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994131 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994414 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994610 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994701 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983784 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995005 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t7wn4"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995057 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995066 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995126 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995197 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994129 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995368 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995773 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995946 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.996141 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.996201 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43e8598d-f86e-425e-8418-bcfb93e3bd63-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994524 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:14.996858 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:14.984578 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:14.997481 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.001572 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.001614 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.002566 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.003474 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d100ddd-343c-48f6-ad0a-e08d3e23a904-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.004237 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21b7f343-d887-4bdf-85c0-9639179e9c56-machine-approver-tls\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.005521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-serving-cert\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.007229 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.007596 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.008056 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.008185 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-96whs"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.008883 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.009316 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.010841 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.011363 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.011893 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.013140 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-client\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.013155 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.016686 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vbsqg"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.017272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.017441 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.018012 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.019853 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.022448 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.022490 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.024143 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vtdww"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.034068 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.043863 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.046186 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.050052 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.051000 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.052979 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zv27c"] Jan 29 11:01:15 crc 
kubenswrapper[4593]: I0129 11:01:15.056118 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-29j27"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.057675 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hr6"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.057740 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-29j27" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.058301 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.063680 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fm7cc"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.076233 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.077990 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.094566 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gz9wd"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.096850 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.099595 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.099877 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.102611 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.105313 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.106841 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.108304 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zv27c"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.110343 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-96whs"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.112687 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l64wd"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.113943 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.115958 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:01:15 crc kubenswrapper[4593]: 
I0129 11:01:15.116604 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.118171 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.119561 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.121917 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.123963 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jnw9r"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.124557 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.125531 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-29j27"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.126982 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.128152 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.129284 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jnw9r"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.136904 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.162216 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.180390 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.196962 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.217362 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.238176 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.258674 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.277548 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.301226 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.316727 4593 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.336990 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.357108 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.377173 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.397614 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.417584 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.438292 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.478404 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.497234 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.517597 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.538052 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.557583 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.579040 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.597919 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.617120 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.638414 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.657898 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.678105 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.697666 4593 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.735664 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5xjz\" (UniqueName: \"kubernetes.io/projected/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-kube-api-access-r5xjz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.737451 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.777543 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78txz\" (UniqueName: \"kubernetes.io/projected/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-kube-api-access-78txz\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.797422 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.814866 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.817146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.844183 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.857849 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.894343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2clt8\" (UniqueName: \"kubernetes.io/projected/3d100ddd-343c-48f6-ad0a-e08d3e23a904-kube-api-access-2clt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.897588 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.917658 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.938136 4593 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.953892 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.958520 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.963525 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.977799 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.977975 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.996265 4593 request.go:700] Waited for 1.000658019s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.999094 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.035532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57bg\" (UniqueName: \"kubernetes.io/projected/43e8598d-f86e-425e-8418-bcfb93e3bd63-kube-api-access-q57bg\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.035863 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.038484 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.057301 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.083423 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.096292 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.117313 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7vb\" (UniqueName: \"kubernetes.io/projected/21b7f343-d887-4bdf-85c0-9639179e9c56-kube-api-access-mq7vb\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.117516 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.137643 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.157684 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.178458 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.198594 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.217535 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.237557 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.256647 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.277841 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.283467 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m9zzn"] Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.293405 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddabb0548_dbdb_438c_a98c_2eb6e2b2c0d9.slice/crio-ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1 WatchSource:0}: Error finding container ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1: Status 404 returned error can't find the container with id ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1 Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.295301 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.297040 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.305611 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.318465 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.331190 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.338592 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.341548 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d100ddd_343c_48f6_ad0a_e08d3e23a904.slice/crio-430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d WatchSource:0}: Error finding container 430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d: Status 404 returned error can't find the container with id 430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.350412 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.357772 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.371015 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b7f343_d887_4bdf_85c0_9639179e9c56.slice/crio-6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758 WatchSource:0}: Error finding container 6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758: Status 404 returned error can't find the container with id 6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758 Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.381180 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.397787 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.417451 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.434535 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.440374 4593 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.448798 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.456960 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.460964 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a22425_a78d_4304_b158_f577c6ef4c4f.slice/crio-334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d WatchSource:0}: Error finding container 334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d: Status 404 returned error can't find the container with id 334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.477970 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.479151 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.497620 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.518199 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.518878 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43e8598d_f86e_425e_8418_bcfb93e3bd63.slice/crio-b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8 WatchSource:0}: Error finding container b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8: Status 404 returned error can't find the container with id b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8 Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.540292 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.568613 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.579116 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.597453 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.620093 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.637068 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.657403 4593 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.697809 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.725135 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.727363 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" event={"ID":"10bf1dd7-30e3-48b9-9651-dcda2f63e89d","Type":"ContainerStarted","Data":"72c1981c91f3459f12949aa930bfc87fd00416da06ce2e5298707aa11ecf8106"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.728067 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" event={"ID":"10bf1dd7-30e3-48b9-9651-dcda2f63e89d","Type":"ContainerStarted","Data":"cb292b817086fa29bdd36ed2260478bb8f786f2e72ccac803988b117e65dd3ab"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.729320 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" event={"ID":"43e8598d-f86e-425e-8418-bcfb93e3bd63","Type":"ContainerStarted","Data":"b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.731182 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" event={"ID":"3d100ddd-343c-48f6-ad0a-e08d3e23a904","Type":"ContainerStarted","Data":"aca9e9c874775aaafe530b40cb5d5bbc4cb5873d4dcbdc4734f8788f6947a7cf"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.731212 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" event={"ID":"3d100ddd-343c-48f6-ad0a-e08d3e23a904","Type":"ContainerStarted","Data":"430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.733689 4593 generic.go:334] "Generic (PLEG): container finished" podID="dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9" containerID="ea824cae612e38a73d8eebdcc401a4ebea50907fa6711e8a50aae46ac9a1cc2a" exitCode=0 Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.733750 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerDied","Data":"ea824cae612e38a73d8eebdcc401a4ebea50907fa6711e8a50aae46ac9a1cc2a"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.733768 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerStarted","Data":"ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.738499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerStarted","Data":"acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 
11:01:16.738535 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerStarted","Data":"9eed55ee0a88f35fc2bf20b9123f7aae8a2cd1091b8b30b1223e2725c98e46d9"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.738779 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.739038 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.741967 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" event={"ID":"21b7f343-d887-4bdf-85c0-9639179e9c56","Type":"ContainerStarted","Data":"e2dc054b9821ef55d0dadbbf18c2f3d134fd990c3496cee804b35dab95a78762"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.742002 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" event={"ID":"21b7f343-d887-4bdf-85c0-9639179e9c56","Type":"ContainerStarted","Data":"6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.742499 4593 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fnv5h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.742543 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.746082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerStarted","Data":"9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.746146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerStarted","Data":"334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.746354 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.747182 4593 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9td98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.747233 4593 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.762016 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.777710 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.797335 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.817470 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.838645 4593 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.858037 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.877506 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.898040 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.918483 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.937353 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962798 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-stats-auth\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962881 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962901 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-client\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962953 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpzz\" (UniqueName: \"kubernetes.io/projected/1c91d49f-a382-4279-91c7-a43b3f1e071e-kube-api-access-2vpzz\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963000 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/51f11901-9a27-4368-9e6d-9ae05692c942-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963033 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963068 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-images\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963151 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-config\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963803 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9pp\" (UniqueName: \"kubernetes.io/projected/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-kube-api-access-bk9pp\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661d5765-a5d7-4cb4-87b9-284f36dc385e-serving-cert\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964048 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964069 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8246045d-6937-4d02-b488-24bcf2eec4bf-serving-cert\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964102 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-metrics-certs\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964118 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2jr\" (UniqueName: \"kubernetes.io/projected/fa5b3597-636e-4cf0-ad99-755378e23867-kube-api-access-5t2jr\") pod \"downloads-7954f5f757-t7wn4\" (UID: \"fa5b3597-636e-4cf0-ad99-755378e23867\") " pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964175 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-service-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964195 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-service-ca-bundle\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964213 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-n97j8\" (UniqueName: \"kubernetes.io/projected/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-kube-api-access-n97j8\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964276 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964318 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915745e3-1528-4d5f-84a6-001471123924-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964387 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51f11901-9a27-4368-9e6d-9ae05692c942-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964489 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-images\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964519 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964546 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-default-certificate\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964583 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964610 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964627 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915745e3-1528-4d5f-84a6-001471123924-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bd78b0-707f-4422-8b39-bd751a8cdcd6-config\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964675 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/478971f0-c97c-4eb1-86d2-50af06b8aafc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964732 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965061 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965091 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965134 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 
11:01:16.965152 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965169 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-client\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965201 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-proxy-tls\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965242 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965256 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965270 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02bd78b0-707f-4422-8b39-bd751a8cdcd6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965299 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965313 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965327 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7r8\" (UniqueName: \"kubernetes.io/projected/bb259eac-6aa7-42d9-883b-2af6b63af4b8-kube-api-access-5x7r8\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965341 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrvc\" (UniqueName: \"kubernetes.io/projected/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-kube-api-access-lqrvc\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/478971f0-c97c-4eb1-86d2-50af06b8aafc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965392 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7lr9\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-kube-api-access-r7lr9\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c91d49f-a382-4279-91c7-a43b3f1e071e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966008 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2697\" (UniqueName: \"kubernetes.io/projected/8246045d-6937-4d02-b488-24bcf2eec4bf-kube-api-access-l2697\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966068 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966308 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmg4\" (UniqueName: \"kubernetes.io/projected/661d5765-a5d7-4cb4-87b9-284f36dc385e-kube-api-access-fbmg4\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966377 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-encryption-config\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966441 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vg54\" (UniqueName: \"kubernetes.io/projected/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-kube-api-access-4vg54\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966459 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzzp\" (UniqueName: \"kubernetes.io/projected/edf60cff-ba6c-450f-bcec-7b14d7513120-kube-api-access-7lzzp\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966494 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966597 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02bd78b0-707f-4422-8b39-bd751a8cdcd6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966622 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2plf\" (UniqueName: \"kubernetes.io/projected/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-kube-api-access-t2plf\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966767 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966786 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966954 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c91d49f-a382-4279-91c7-a43b3f1e071e-proxy-tls\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-policies\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: 
\"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967134 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967164 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb259eac-6aa7-42d9-883b-2af6b63af4b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf60cff-ba6c-450f-bcec-7b14d7513120-metrics-tls\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967329 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-serving-cert\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967367 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-config\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967394 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/478971f0-c97c-4eb1-86d2-50af06b8aafc-config\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967418 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-config\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 
crc kubenswrapper[4593]: I0129 11:01:16.967442 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967464 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967513 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967545 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-trusted-ca\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967571 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-config\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967610 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967698 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967739 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: E0129 11:01:16.967887 4593 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.467874326 +0000 UTC m=+143.340908517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.968043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-serving-cert\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.968098 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915745e3-1528-4d5f-84a6-001471123924-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.968124 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-dir\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069361 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.069531 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.569503838 +0000 UTC m=+143.442538039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069667 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069702 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069729 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069752 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069780 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069810 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069835 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-client\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069859 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-proxy-tls\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069885 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-profile-collector-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069908 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/59084a0c-807b-47c9-b905-6e07817bcb89-tmpfs\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069937 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ddj\" (UniqueName: \"kubernetes.io/projected/fae65f9f-a5ea-442a-8c78-aa650d330c4d-kube-api-access-25ddj\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069972 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069996 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070731 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bce548b-2c64-4ac5-a797-979de4cf7656-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070769 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070794 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/02bd78b0-707f-4422-8b39-bd751a8cdcd6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070819 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-cabundle\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070851 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070875 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e50d23-1adc-4462-9424-1d2157c2ff93-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070901 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070928 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x7r8\" (UniqueName: \"kubernetes.io/projected/bb259eac-6aa7-42d9-883b-2af6b63af4b8-kube-api-access-5x7r8\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070954 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrvc\" (UniqueName: \"kubernetes.io/projected/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-kube-api-access-lqrvc\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/478971f0-c97c-4eb1-86d2-50af06b8aafc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071002 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071092 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-plugins-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071121 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7lr9\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-kube-api-access-r7lr9\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071151 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqbr\" (UniqueName: \"kubernetes.io/projected/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-kube-api-access-hqqbr\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071177 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c91d49f-a382-4279-91c7-a43b3f1e071e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072552 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2697\" (UniqueName: \"kubernetes.io/projected/8246045d-6937-4d02-b488-24bcf2eec4bf-kube-api-access-l2697\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072595 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2zw\" (UniqueName: 
\"kubernetes.io/projected/65e50d23-1adc-4462-9424-1d2157c2ff93-kube-api-access-ws2zw\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072627 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072666 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072692 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw88\" (UniqueName: \"kubernetes.io/projected/719f2fcb-45e2-4600-82d9-fbf4263201a2-kube-api-access-rlw88\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmg4\" (UniqueName: \"kubernetes.io/projected/661d5765-a5d7-4cb4-87b9-284f36dc385e-kube-api-access-fbmg4\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072743 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-encryption-config\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072771 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vg54\" (UniqueName: \"kubernetes.io/projected/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-kube-api-access-4vg54\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072793 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzzp\" (UniqueName: 
\"kubernetes.io/projected/edf60cff-ba6c-450f-bcec-7b14d7513120-kube-api-access-7lzzp\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072842 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-webhook-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072865 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-socket-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072896 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072919 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072944 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88cj\" (UniqueName: \"kubernetes.io/projected/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-kube-api-access-p88cj\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072998 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7651ef0-a985-4314-a20a-7103624a257a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073022 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-srv-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: 
\"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073052 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02bd78b0-707f-4422-8b39-bd751a8cdcd6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073076 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4n58\" (UniqueName: \"kubernetes.io/projected/59084a0c-807b-47c9-b905-6e07817bcb89-kube-api-access-k4n58\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073102 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2plf\" (UniqueName: \"kubernetes.io/projected/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-kube-api-access-t2plf\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073124 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-certs\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073144 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwx5\" (UniqueName: \"kubernetes.io/projected/bf0241bd-f637-4b8b-b78a-797549fe5da9-kube-api-access-hxwx5\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073156 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073168 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073224 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073253 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073276 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-registration-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073306 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073329 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-metrics-tls\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073350 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073398 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c91d49f-a382-4279-91c7-a43b3f1e071e-proxy-tls\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073420 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-policies\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073440 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073466 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cvq\" 
(UniqueName: \"kubernetes.io/projected/c5d626cc-ab7a-408c-9955-c3fc676a799b-kube-api-access-z2cvq\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073489 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-srv-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073510 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-key\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073534 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvm7v\" (UniqueName: \"kubernetes.io/projected/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-kube-api-access-cvm7v\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073609 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074366 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb259eac-6aa7-42d9-883b-2af6b63af4b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074405 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf60cff-ba6c-450f-bcec-7b14d7513120-metrics-tls\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074425 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.075265 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: 
\"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.077892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-policies\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.078672 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.079463 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072242 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072185 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.079960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080002 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-client\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080275 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080396 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c91d49f-a382-4279-91c7-a43b3f1e071e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080844 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.082342 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-serving-cert\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083071 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083279 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-cert\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083303 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-node-bootstrap-token\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-config\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083359 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-config\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083401 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083419 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083436 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/478971f0-c97c-4eb1-86d2-50af06b8aafc-config\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083453 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fae65f9f-a5ea-442a-8c78-aa650d330c4d-serving-cert\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084440 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084815 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-config\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084944 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-encryption-config\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084998 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod 
\"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.085045 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.085245 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.585234117 +0000 UTC m=+143.458268308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095213 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-trusted-ca\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095268 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-config\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095288 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7651ef0-a985-4314-a20a-7103624a257a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095304 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095334 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-csi-data-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc 
kubenswrapper[4593]: I0129 11:01:17.095353 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095368 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-apiservice-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095389 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095406 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbjn6\" (UniqueName: \"kubernetes.io/projected/58e36a23-974a-4afd-b226-bb194d489cf0-kube-api-access-vbjn6\") pod \"migrator-59844c95c7-8b552\" (UID: \"58e36a23-974a-4afd-b226-bb194d489cf0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095421 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae65f9f-a5ea-442a-8c78-aa650d330c4d-config\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095444 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095466 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7cw\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-kube-api-access-dd7cw\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095480 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-serving-cert\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095494 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095519 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915745e3-1528-4d5f-84a6-001471123924-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095534 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-dir\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095550 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-stats-auth\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095565 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nksd\" (UniqueName: \"kubernetes.io/projected/9bce548b-2c64-4ac5-a797-979de4cf7656-kube-api-access-2nksd\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095579 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e50d23-1adc-4462-9424-1d2157c2ff93-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095597 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095612 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-client\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098337 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpzz\" (UniqueName: \"kubernetes.io/projected/1c91d49f-a382-4279-91c7-a43b3f1e071e-kube-api-access-2vpzz\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098359 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/6910728e-feba-4826-8447-11f4cf860c30-kube-api-access-g5tcj\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098381 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/51f11901-9a27-4368-9e6d-9ae05692c942-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098402 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098419 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-images\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.090610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf60cff-ba6c-450f-bcec-7b14d7513120-metrics-tls\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099661 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-config\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099692 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9pp\" (UniqueName: \"kubernetes.io/projected/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-kube-api-access-bk9pp\") pod 
\"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099710 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661d5765-a5d7-4cb4-87b9-284f36dc385e-serving-cert\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099729 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099750 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8246045d-6937-4d02-b488-24bcf2eec4bf-serving-cert\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099766 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-mountpoint-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099784 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdr5\" (UniqueName: \"kubernetes.io/projected/e9136490-ddbf-4318-91c6-e73d74e7b599-kube-api-access-vvdr5\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099803 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099822 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2jr\" (UniqueName: \"kubernetes.io/projected/fa5b3597-636e-4cf0-ad99-755378e23867-kube-api-access-5t2jr\") pod \"downloads-7954f5f757-t7wn4\" (UID: \"fa5b3597-636e-4cf0-ad99-755378e23867\") " pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099821 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c91d49f-a382-4279-91c7-a43b3f1e071e-proxy-tls\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 
29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099837 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-service-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099989 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-metrics-certs\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100015 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100036 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/719f2fcb-45e2-4600-82d9-fbf4263201a2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915745e3-1528-4d5f-84a6-001471123924-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100233 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-service-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.097950 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb259eac-6aa7-42d9-883b-2af6b63af4b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.085457 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/478971f0-c97c-4eb1-86d2-50af06b8aafc-config\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100475 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-service-ca-bundle\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100506 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n97j8\" (UniqueName: \"kubernetes.io/projected/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-kube-api-access-n97j8\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098273 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02bd78b0-707f-4422-8b39-bd751a8cdcd6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100641 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51f11901-9a27-4368-9e6d-9ae05692c942-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100665 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-images\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100792 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-default-certificate\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100837 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100921 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-serving-cert\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100956 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.091893 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.094117 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101826 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95l2\" (UniqueName: \"kubernetes.io/projected/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-kube-api-access-c95l2\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101983 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915745e3-1528-4d5f-84a6-001471123924-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102005 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bd78b0-707f-4422-8b39-bd751a8cdcd6-config\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102130 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-proxy-tls\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102157 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/478971f0-c97c-4eb1-86d2-50af06b8aafc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102199 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-config-volume\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101676 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/478971f0-c97c-4eb1-86d2-50af06b8aafc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.103321 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.104150 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bd78b0-707f-4422-8b39-bd751a8cdcd6-config\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.104711 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.106941 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.106993 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915745e3-1528-4d5f-84a6-001471123924-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.107044 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-dir\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.107510 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51f11901-9a27-4368-9e6d-9ae05692c942-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.107680 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.108606 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-config\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110131 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110309 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-service-ca-bundle\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110471 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-images\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.111400 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-trusted-ca\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.111423 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-images\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.111741 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.112069 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-config\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.112083 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-config\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.112608 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.114802 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.114860 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x7r8\" (UniqueName: \"kubernetes.io/projected/bb259eac-6aa7-42d9-883b-2af6b63af4b8-kube-api-access-5x7r8\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.114998 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661d5765-a5d7-4cb4-87b9-284f36dc385e-serving-cert\") pod 
\"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.115096 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.115187 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-stats-auth\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.118943 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119196 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-serving-cert\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119358 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8246045d-6937-4d02-b488-24bcf2eec4bf-serving-cert\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119466 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119814 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915745e3-1528-4d5f-84a6-001471123924-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119993 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.120127 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/51f11901-9a27-4368-9e6d-9ae05692c942-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.120499 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.120543 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-metrics-certs\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.122239 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-client\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.122645 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-default-certificate\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.132583 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.151775 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02bd78b0-707f-4422-8b39-bd751a8cdcd6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.171090 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.175931 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7lr9\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-kube-api-access-r7lr9\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.193227 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2697\" (UniqueName: \"kubernetes.io/projected/8246045d-6937-4d02-b488-24bcf2eec4bf-kube-api-access-l2697\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205335 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.205530 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.70549688 +0000 UTC m=+143.578531071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbjn6\" (UniqueName: \"kubernetes.io/projected/58e36a23-974a-4afd-b226-bb194d489cf0-kube-api-access-vbjn6\") pod \"migrator-59844c95c7-8b552\" (UID: \"58e36a23-974a-4afd-b226-bb194d489cf0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205660 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae65f9f-a5ea-442a-8c78-aa650d330c4d-config\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205684 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7cw\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-kube-api-access-dd7cw\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205702 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205732 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nksd\" (UniqueName: \"kubernetes.io/projected/9bce548b-2c64-4ac5-a797-979de4cf7656-kube-api-access-2nksd\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205753 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e50d23-1adc-4462-9424-1d2157c2ff93-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205784 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/6910728e-feba-4826-8447-11f4cf860c30-kube-api-access-g5tcj\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205837 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-mountpoint-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205868 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdr5\" (UniqueName: \"kubernetes.io/projected/e9136490-ddbf-4318-91c6-e73d74e7b599-kube-api-access-vvdr5\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205897 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/719f2fcb-45e2-4600-82d9-fbf4263201a2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205948 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95l2\" (UniqueName: \"kubernetes.io/projected/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-kube-api-access-c95l2\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205968 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-config-volume\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205996 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206017 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-profile-collector-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/59084a0c-807b-47c9-b905-6e07817bcb89-tmpfs\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25ddj\" (UniqueName: \"kubernetes.io/projected/fae65f9f-a5ea-442a-8c78-aa650d330c4d-kube-api-access-25ddj\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206090 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bce548b-2c64-4ac5-a797-979de4cf7656-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206112 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-cabundle\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae65f9f-a5ea-442a-8c78-aa650d330c4d-config\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206914 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-config-volume\") pod \"dns-default-29j27\" (UID: 
\"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.207028 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.207076 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/59084a0c-807b-47c9-b905-6e07817bcb89-tmpfs\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.207264 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-mountpoint-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208016 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-cabundle\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208080 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e50d23-1adc-4462-9424-1d2157c2ff93-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208375 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208413 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208430 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-plugins-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208463 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hqqbr\" (UniqueName: \"kubernetes.io/projected/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-kube-api-access-hqqbr\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2zw\" (UniqueName: \"kubernetes.io/projected/65e50d23-1adc-4462-9424-1d2157c2ff93-kube-api-access-ws2zw\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208538 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlw88\" (UniqueName: \"kubernetes.io/projected/719f2fcb-45e2-4600-82d9-fbf4263201a2-kube-api-access-rlw88\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208568 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208591 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-webhook-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-socket-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208649 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208672 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p88cj\" (UniqueName: \"kubernetes.io/projected/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-kube-api-access-p88cj\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208695 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7651ef0-a985-4314-a20a-7103624a257a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: 
\"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208709 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-srv-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208726 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4n58\" (UniqueName: \"kubernetes.io/projected/59084a0c-807b-47c9-b905-6e07817bcb89-kube-api-access-k4n58\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-certs\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208761 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwx5\" (UniqueName: \"kubernetes.io/projected/bf0241bd-f637-4b8b-b78a-797549fe5da9-kube-api-access-hxwx5\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208778 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208792 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-registration-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208809 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-metrics-tls\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208852 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2cvq\" 
(UniqueName: \"kubernetes.io/projected/c5d626cc-ab7a-408c-9955-c3fc676a799b-kube-api-access-z2cvq\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208869 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-srv-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208884 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-key\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208903 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvm7v\" (UniqueName: \"kubernetes.io/projected/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-kube-api-access-cvm7v\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208920 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-cert\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208934 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-node-bootstrap-token\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208958 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208974 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fae65f9f-a5ea-442a-8c78-aa650d330c4d-serving-cert\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208991 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7651ef0-a985-4314-a20a-7103624a257a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc 
kubenswrapper[4593]: I0129 11:01:17.209126 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.209149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-csi-data-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.209178 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-apiservice-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.210133 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bce548b-2c64-4ac5-a797-979de4cf7656-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.210236 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e50d23-1adc-4462-9424-1d2157c2ff93-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.210521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-socket-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.211585 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-apiservice-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.211723 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-registration-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.212782 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.212920 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/719f2fcb-45e2-4600-82d9-fbf4263201a2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.213287 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.213800 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e50d23-1adc-4462-9424-1d2157c2ff93-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.212790 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-profile-collector-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.213958 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-plugins-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.214254 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.214280 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.214930 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-metrics-tls\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.216074 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7651ef0-a985-4314-a20a-7103624a257a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.216129 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.216298 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-csi-data-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.217113 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-srv-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.217201 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fae65f9f-a5ea-442a-8c78-aa650d330c4d-serving-cert\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.217678 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.71766228 +0000 UTC m=+143.590696531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.219620 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-certs\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.219932 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-cert\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.220246 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.220456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-node-bootstrap-token\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.220912 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7651ef0-a985-4314-a20a-7103624a257a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.221299 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-webhook-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.221402 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-srv-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.221672 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hw52m\" (UID: 
\"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.223751 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-key\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.238029 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmg4\" (UniqueName: \"kubernetes.io/projected/661d5765-a5d7-4cb4-87b9-284f36dc385e-kube-api-access-fbmg4\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.259321 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vg54\" (UniqueName: \"kubernetes.io/projected/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-kube-api-access-4vg54\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.275729 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzzp\" (UniqueName: \"kubernetes.io/projected/edf60cff-ba6c-450f-bcec-7b14d7513120-kube-api-access-7lzzp\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.277795 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.300205 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.310237 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.310613 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.810583258 +0000 UTC m=+143.683617459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.311032 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.311304 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrvc\" (UniqueName: \"kubernetes.io/projected/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-kube-api-access-lqrvc\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.311451 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.811435922 +0000 UTC m=+143.684470113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.312660 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2plf\" (UniqueName: \"kubernetes.io/projected/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-kube-api-access-t2plf\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.350360 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.369954 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.407052 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpzz\" 
(UniqueName: \"kubernetes.io/projected/1c91d49f-a382-4279-91c7-a43b3f1e071e-kube-api-access-2vpzz\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.412003 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.412443 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.912424546 +0000 UTC m=+143.785458737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.412557 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.425104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.431435 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915745e3-1528-4d5f-84a6-001471123924-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.436616 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.442503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n97j8\" (UniqueName: \"kubernetes.io/projected/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-kube-api-access-n97j8\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.448523 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.454879 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/478971f0-c97c-4eb1-86d2-50af06b8aafc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.506909 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9pp\" (UniqueName: \"kubernetes.io/projected/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-kube-api-access-bk9pp\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.513460 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.513829 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.01378017 +0000 UTC m=+143.886814361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.515442 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.526332 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.528311 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2jr\" (UniqueName: \"kubernetes.io/projected/fa5b3597-636e-4cf0-ad99-755378e23867-kube-api-access-5t2jr\") pod \"downloads-7954f5f757-t7wn4\" (UID: \"fa5b3597-636e-4cf0-ad99-755378e23867\") " pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.546068 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.554610 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.558221 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nksd\" (UniqueName: \"kubernetes.io/projected/9bce548b-2c64-4ac5-a797-979de4cf7656-kube-api-access-2nksd\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.564045 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.565761 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.582377 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbjn6\" (UniqueName: \"kubernetes.io/projected/58e36a23-974a-4afd-b226-bb194d489cf0-kube-api-access-vbjn6\") pod \"migrator-59844c95c7-8b552\" (UID: \"58e36a23-974a-4afd-b226-bb194d489cf0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.583964 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.584026 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vtdww"] Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.585517 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7cw\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-kube-api-access-dd7cw\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.590943 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.606327 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.624252 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/6910728e-feba-4826-8447-11f4cf860c30-kube-api-access-g5tcj\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.628003 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95l2\" (UniqueName: \"kubernetes.io/projected/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-kube-api-access-c95l2\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.628376 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.629930 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.630303 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.630614 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.130583375 +0000 UTC m=+144.003617566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.639056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59"] Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.641615 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.667610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25ddj\" (UniqueName: \"kubernetes.io/projected/fae65f9f-a5ea-442a-8c78-aa650d330c4d-kube-api-access-25ddj\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.671284 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdr5\" (UniqueName: \"kubernetes.io/projected/e9136490-ddbf-4318-91c6-e73d74e7b599-kube-api-access-vvdr5\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.680712 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwx5\" (UniqueName: \"kubernetes.io/projected/bf0241bd-f637-4b8b-b78a-797549fe5da9-kube-api-access-hxwx5\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.687071 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.707359 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.726405 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.732023 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqbr\" (UniqueName: \"kubernetes.io/projected/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-kube-api-access-hqqbr\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.736857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.737503 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.237485575 +0000 UTC m=+144.110519776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.739426 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.753049 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.766120 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2zw\" (UniqueName: \"kubernetes.io/projected/65e50d23-1adc-4462-9424-1d2157c2ff93-kube-api-access-ws2zw\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.766385 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.766393 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.767343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlw88\" (UniqueName: \"kubernetes.io/projected/719f2fcb-45e2-4600-82d9-fbf4263201a2-kube-api-access-rlw88\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.796568 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4n58\" (UniqueName: \"kubernetes.io/projected/59084a0c-807b-47c9-b905-6e07817bcb89-kube-api-access-k4n58\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.806306 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.813650 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.816859 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" event={"ID":"21b7f343-d887-4bdf-85c0-9639179e9c56","Type":"ContainerStarted","Data":"3b9102c29ded7f3b1489c588a4b593d3cebe14bc8fa2ee108915c50f56d9c663"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.837880 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p88cj\" (UniqueName: \"kubernetes.io/projected/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-kube-api-access-p88cj\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.838621 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.840784 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.340764562 +0000 UTC m=+144.213798763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.855124 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.855648 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.355620628 +0000 UTC m=+144.228654819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.872895 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.875777 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xx52v" event={"ID":"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc","Type":"ContainerStarted","Data":"5318c72dab4e60db769bd489cccc03cce121501c49e9c505d3cbc034a7383dd0"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.875823 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xx52v" event={"ID":"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc","Type":"ContainerStarted","Data":"dc32442090514fd507db2550fc7ca88aa73610ee15acc127f9a2ee87dfa40516"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.876919 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvm7v\" (UniqueName: \"kubernetes.io/projected/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-kube-api-access-cvm7v\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.887949 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2cvq\" (UniqueName: \"kubernetes.io/projected/c5d626cc-ab7a-408c-9955-c3fc676a799b-kube-api-access-z2cvq\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.890591 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerStarted","Data":"ed50f82eb21665ad0890e00283aeb85786484b14c6fef7e831ff132d86d798cc"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.890651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerStarted","Data":"b63d0af04b2f51a2972545516629f3571ef5538eed8c38c76235e7ce0ea2c411"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.920105 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.933172 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" event={"ID":"02bd78b0-707f-4422-8b39-bd751a8cdcd6","Type":"ContainerStarted","Data":"8cc34a9f01e6a31bd34bf1aad0256d9170eb730a022e8dc844968e80f0f4d1d1"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.937244 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" event={"ID":"bb259eac-6aa7-42d9-883b-2af6b63af4b8","Type":"ContainerStarted","Data":"3d3c29b8d7af237ec93e0cca6239f6206a877a189af80d2749e29b6cadc9b4b0"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.950601 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.953032 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.955849 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.959399 4593 generic.go:334] "Generic (PLEG): container finished" podID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerID="e837e36ad5d7e8a69016f9ffac8611b74ac4184f83d4fdd3d146af3a3120a4ce" exitCode=0 Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.959471 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" event={"ID":"43e8598d-f86e-425e-8418-bcfb93e3bd63","Type":"ContainerDied","Data":"e837e36ad5d7e8a69016f9ffac8611b74ac4184f83d4fdd3d146af3a3120a4ce"} Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.959536 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.459521943 +0000 UTC m=+144.332556134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.971088 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.974972 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.981173 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerStarted","Data":"b2d3338b1514b5c7e9256324d64b1f803fa4ccbc8cc1a14cc26386a3d7708bb8"} Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.981473 4593 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9td98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.981498 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.989669 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.013235 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.019032 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.032747 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.057589 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.066496 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.566477564 +0000 UTC m=+144.439511815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.082392 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.162282 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.162692 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.662674343 +0000 UTC m=+144.535708534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.260804 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.264506 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.264834 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.76482238 +0000 UTC m=+144.637856571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.304128 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.313684 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.322736 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.322797 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.366089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.366409 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.86639428 +0000 UTC m=+144.739428471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.409960 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" podStartSLOduration=122.409940418 podStartE2EDuration="2m2.409940418s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.407045807 +0000 UTC m=+144.280079998" watchObservedRunningTime="2026-01-29 11:01:18.409940418 +0000 UTC m=+144.282974609" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.452848 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xx52v" podStartSLOduration=122.452829147 podStartE2EDuration="2m2.452829147s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.438418944 +0000 UTC m=+144.311453135" watchObservedRunningTime="2026-01-29 11:01:18.452829147 +0000 UTC m=+144.325863338" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.453419 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fm7cc"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.468997 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.469554 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.969538365 +0000 UTC m=+144.842572556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.488734 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode544204e_7186_4a22_a6bf_79a5101af4b6.slice/crio-0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64 WatchSource:0}: Error finding container 0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64: Status 404 returned error can't find the container with id 0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64 Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.518009 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"] Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.519182 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0241bd_f637_4b8b_b78a_797549fe5da9.slice/crio-cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8 WatchSource:0}: Error finding container cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8: Status 404 returned error can't find the container with id cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8 Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.536578 4593 csr.go:261] certificate signing request csr-gwdhb is approved, waiting to be issued Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.545467 4593 csr.go:257] certificate signing request csr-gwdhb is issued Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.546859 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hr6"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.579090 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.579440 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.079423847 +0000 UTC m=+144.952458038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.594059 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" podStartSLOduration=122.594043425 podStartE2EDuration="2m2.594043425s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.560449076 +0000 UTC m=+144.433483267" watchObservedRunningTime="2026-01-29 11:01:18.594043425 +0000 UTC m=+144.467077606" Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.598851 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51f11901_9a27_4368_9e6d_9ae05692c942.slice/crio-e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f WatchSource:0}: Error finding container e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f: Status 404 returned error can't find the container with id e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.680097 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.682079 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.682504 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.182491739 +0000 UTC m=+145.055525930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.731723 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d8d97d7_c0b0_4b84_90a2_42e4c49f9d50.slice/crio-927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f WatchSource:0}: Error finding container 927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f: Status 404 returned error can't find the container with id 927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.784271 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.784759 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.284733648 +0000 UTC m=+145.157767869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.791038 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" podStartSLOduration=122.791018833 podStartE2EDuration="2m2.791018833s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.772358832 +0000 UTC m=+144.645393013" watchObservedRunningTime="2026-01-29 11:01:18.791018833 +0000 UTC m=+144.664053024" Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.802231 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l64wd"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.846037 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gz9wd"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.848511 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.858346 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.886353 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.886848 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.386832403 +0000 UTC m=+145.259866594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.929531 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"] Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.993215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.993728 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.493708591 +0000 UTC m=+145.366742782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: W0129 11:01:19.052179 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8246045d_6937_4d02_b488_24bcf2eec4bf.slice/crio-e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc WatchSource:0}: Error finding container e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc: Status 404 returned error can't find the container with id e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.094544 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.095096 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.595079085 +0000 UTC m=+145.468113286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.128881 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" event={"ID":"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50","Type":"ContainerStarted","Data":"927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.143215 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" podStartSLOduration=123.143199271 podStartE2EDuration="2m3.143199271s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:19.127620995 +0000 UTC m=+145.000655196" watchObservedRunningTime="2026-01-29 11:01:19.143199271 +0000 UTC m=+145.016233462" Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.192994 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" event={"ID":"f0ee22f5-d5c3-4686-ab5d-53223d05bef6","Type":"ContainerStarted","Data":"57615d8c750f59fb2bc9b3523ad3ef2bc11b07e4737982f42eb88c8e6632c6dd"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.195778 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.196158 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.69614316 +0000 UTC m=+145.569177351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.247156 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" event={"ID":"661d5765-a5d7-4cb4-87b9-284f36dc385e","Type":"ContainerStarted","Data":"632716971daf9c9bb8743ed272d65cb7d1924ec899b8897d893f85f1a7895f47"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.249045 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerStarted","Data":"0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.250266 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" event={"ID":"51f11901-9a27-4368-9e6d-9ae05692c942","Type":"ContainerStarted","Data":"e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.302108 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.302397 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.802386511 +0000 UTC m=+145.675420702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.320091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" event={"ID":"915745e3-1528-4d5f-84a6-001471123924","Type":"ContainerStarted","Data":"1ad4a0096e5f894db159a22d01e6b99d48da341bc0b421d722d046dfaeb1e15f"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.357870 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:19 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:19 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:19 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.357929 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.363043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vbsqg" event={"ID":"bf0241bd-f637-4b8b-b78a-797549fe5da9","Type":"ContainerStarted","Data":"cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8"} Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.404193 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.404381 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.904358873 +0000 UTC m=+145.777393054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.404736 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.405922 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.905907346 +0000 UTC m=+145.778941587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.428096 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"] Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.512205 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.512492 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.012477896 +0000 UTC m=+145.885512087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.547289 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"] Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.552126 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 10:56:18 +0000 UTC, rotation deadline is 2026-10-23 11:35:29.510602324 +0000 UTC Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.553578 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6408h34m9.957028899s for next certificate rotation Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.614755 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.615211 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.115195738 +0000 UTC m=+145.988229929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.719140 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.720088 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.220068901 +0000 UTC m=+146.093103102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.761856 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" podStartSLOduration=123.761828438 podStartE2EDuration="2m3.761828438s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:19.760589124 +0000 UTC m=+145.633623335" watchObservedRunningTime="2026-01-29 11:01:19.761828438 +0000 UTC m=+145.634862629" Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.763330 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podStartSLOduration=123.763322831 podStartE2EDuration="2m3.763322831s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:19.723537367 +0000 UTC m=+145.596571558" watchObservedRunningTime="2026-01-29 11:01:19.763322831 +0000 UTC m=+145.636357022" Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.822982 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.823390 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.323379379 +0000 UTC m=+146.196413570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: W0129 11:01:19.840027 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6910728e_feba_4826_8447_11f4cf860c30.slice/crio-6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637 WatchSource:0}: Error finding container 6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637: Status 404 returned error can't find the container with id 6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637 Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.925357 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.928209 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.42818993 +0000 UTC m=+146.301224121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.967368 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"] Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.986392 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.027642 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.028359 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.528343281 +0000 UTC m=+146.401377472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.129041 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.129434 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.629414577 +0000 UTC m=+146.502448768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.139985 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zv27c"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.235482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.235875 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.735859203 +0000 UTC m=+146.608893394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.245451 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5dc1f_d576_46dd_9de7_2a63c6d4157f.slice/crio-6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5 WatchSource:0}: Error finding container 6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5: Status 404 returned error can't find the container with id 6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5 Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.258471 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9136490_ddbf_4318_91c6_e73d74e7b599.slice/crio-4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c WatchSource:0}: Error finding container 4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c: Status 404 returned error can't find the container with id 4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.313150 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:20 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:20 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:20 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.313203 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.336495 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.337079 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.837060732 +0000 UTC m=+146.710094923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.411179 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" event={"ID":"6910728e-feba-4826-8447-11f4cf860c30","Type":"ContainerStarted","Data":"6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.433023 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.438782 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.439200 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.939187158 +0000 UTC m=+146.812221349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.491019 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" event={"ID":"02bd78b0-707f-4422-8b39-bd751a8cdcd6","Type":"ContainerStarted","Data":"7c1c7b513147e3ac358e52d2182023600324bd4cc4d0739091fb5509c46818eb"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.495691 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.495825 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.508969 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" event={"ID":"8246045d-6937-4d02-b488-24bcf2eec4bf","Type":"ContainerStarted","Data":"e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.530473 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" event={"ID":"58e36a23-974a-4afd-b226-bb194d489cf0","Type":"ContainerStarted","Data":"9015e523f1f3cc972f8aef7fc501a0654bbed5a650252aeacf03ee67aea0e98f"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.535527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" event={"ID":"43e8598d-f86e-425e-8418-bcfb93e3bd63","Type":"ContainerStarted","Data":"f3783d891e0881e705c422a22425dc329851be8b69b4a137cddd1be32a52cace"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.536310 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.539286 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.546541 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.547007 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.046992252 +0000 UTC m=+146.920026443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.555904 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerStarted","Data":"6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.584905 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.585552 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podStartSLOduration=124.585534391 podStartE2EDuration="2m4.585534391s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.574145462 +0000 UTC m=+146.447179653" watchObservedRunningTime="2026-01-29 11:01:20.585534391 +0000 UTC m=+146.458568582" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.614714 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-29j27"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.625303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerStarted","Data":"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.647561 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.647940 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.147925765 +0000 UTC m=+147.020959946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.672055 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8425v" podStartSLOduration=124.672017039 podStartE2EDuration="2m4.672017039s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.669961701 +0000 UTC m=+146.542995892" watchObservedRunningTime="2026-01-29 11:01:20.672017039 +0000 UTC m=+146.545051230" Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.676788 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719f2fcb_45e2_4600_82d9_fbf4263201a2.slice/crio-c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767 WatchSource:0}: Error finding container c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767: Status 404 returned error can't find the container with id c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767 Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.749572 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.752515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" event={"ID":"1c91d49f-a382-4279-91c7-a43b3f1e071e","Type":"ContainerStarted","Data":"b29bfbac452a594f19138086c3d449a57600658f19ecd7acbbac7f7c3c50e774"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.752559 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" event={"ID":"1c91d49f-a382-4279-91c7-a43b3f1e071e","Type":"ContainerStarted","Data":"e642ebf4eca78625c6a6c2f89ebbe064cddcf67c3319f8518e67ac8783036146"} Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.753544 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.253512748 +0000 UTC m=+147.126546959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.754730 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aa74baf_fde3_4dad_aef0_7b8b1ae90098.slice/crio-b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef WatchSource:0}: Error finding container b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef: Status 404 returned error can't find the container with id b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.779197 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a7ffb2d_39e9_426f_9364_ebe193a5adc8.slice/crio-98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63 WatchSource:0}: Error finding container 98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63: Status 404 returned error can't find the container with id 98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63 Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.779499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" event={"ID":"51f11901-9a27-4368-9e6d-9ae05692c942","Type":"ContainerStarted","Data":"098bab81a052020df3698907802477042efe83403d7ec4b65346f8eb610613b2"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.815559 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" event={"ID":"dc1056e0-74e9-4be8-bcdf-92604e23a2e1","Type":"ContainerStarted","Data":"ab516fca4f079c481a8a89388efe9a298f131911c8e7d09547e623e04e04cc44"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.830024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vbsqg" event={"ID":"bf0241bd-f637-4b8b-b78a-797549fe5da9","Type":"ContainerStarted","Data":"23eb963bc3a50dcc87c540c0aeac1e86811881b9582d9623d3e21dbf881ea281"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.834594 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" podStartSLOduration=124.834575024 podStartE2EDuration="2m4.834575024s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.823206357 +0000 UTC m=+146.696240548" watchObservedRunningTime="2026-01-29 11:01:20.834575024 +0000 UTC m=+146.707609215" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.854234 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: 
\"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.854556 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.354542192 +0000 UTC m=+147.227576393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.868391 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" event={"ID":"9bce548b-2c64-4ac5-a797-979de4cf7656","Type":"ContainerStarted","Data":"6f3fa8227dd1a01d4a4ae4526929ee8a68020cdbbce4d38f1e42291cf196886a"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.918166 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vbsqg" podStartSLOduration=6.91814575 podStartE2EDuration="6.91814575s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.89236151 +0000 UTC m=+146.765395711" watchObservedRunningTime="2026-01-29 11:01:20.91814575 +0000 UTC m=+146.791179941" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.920186 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-96whs"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.927537 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" event={"ID":"661d5765-a5d7-4cb4-87b9-284f36dc385e","Type":"ContainerStarted","Data":"9f1d52299e8187dd965ebff851605459dcfcb9666a7a05c92d57f944764e3718"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.928234 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.932398 4593 patch_prober.go:28] interesting pod/console-operator-58897d9998-fm7cc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.932467 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podUID="661d5765-a5d7-4cb4-87b9-284f36dc385e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.964048 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 
11:01:20.964393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.965065 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.966861 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.466839202 +0000 UTC m=+147.339873393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.986499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" event={"ID":"edf60cff-ba6c-450f-bcec-7b14d7513120","Type":"ContainerStarted","Data":"a1b0bbf083dd4815c2b6a4028f68ba1230f78cefe6ada0632169815e19d3d52b"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.993274 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.028705 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" event={"ID":"478971f0-c97c-4eb1-86d2-50af06b8aafc","Type":"ContainerStarted","Data":"7416395569cb99fb4a8e8bc9561297a2a31c9aae9116f459c305e399f5bc950c"} Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.030006 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podStartSLOduration=125.029979867 podStartE2EDuration="2m5.029979867s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:21.022446897 +0000 UTC m=+146.895481098" watchObservedRunningTime="2026-01-29 11:01:21.029979867 +0000 UTC m=+146.903014078" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.072072 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jnw9r"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.074242 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.074575 4593 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.574557094 +0000 UTC m=+147.447591285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: W0129 11:01:21.169910 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee0cc5f_ef60_4aac_9a88_dd2a0c767afc.slice/crio-fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35 WatchSource:0}: Error finding container fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35: Status 404 returned error can't find the container with id fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35 Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181198 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" event={"ID":"bb259eac-6aa7-42d9-883b-2af6b63af4b8","Type":"ContainerStarted","Data":"0d1f1da0ccfcb7023e9050ac93a5de5cd880847710176ac0ddad52f400549a8f"} Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181244 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c"} Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181261 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181280 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t7wn4"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181358 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.186028 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.186927 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.686876474 +0000 UTC m=+147.559910665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.193113 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.193448 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.693432608 +0000 UTC m=+147.566466799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.241950 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" podStartSLOduration=125.241924034 podStartE2EDuration="2m5.241924034s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:21.176831244 +0000 UTC m=+147.049865435" watchObservedRunningTime="2026-01-29 11:01:21.241924034 +0000 UTC m=+147.114958225" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.259960 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.277564 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.296137 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.313000 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.81296706 +0000 UTC m=+147.686001331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.323941 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:21 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:21 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:21 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.324005 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.413859 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.414228 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.914210672 +0000 UTC m=+147.787244863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: W0129 11:01:21.423858 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59084a0c_807b_47c9_b905_6e07817bcb89.slice/crio-b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612 WatchSource:0}: Error finding container b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612: Status 404 returned error can't find the container with id b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612 Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.518341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.518659 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.018624061 +0000 UTC m=+147.891658252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.621446 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.621862 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.121848937 +0000 UTC m=+147.994883128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.723390 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.723546 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.22351713 +0000 UTC m=+148.096551331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.723686 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.724066 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.224055665 +0000 UTC m=+148.097089856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.824689 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.825267 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.325248434 +0000 UTC m=+148.198282625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.925850 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.926289 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.426273909 +0000 UTC m=+148.299308100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.027028 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.027614 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.527593163 +0000 UTC m=+148.400627364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.128914 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.129823 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.629808461 +0000 UTC m=+148.502842652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.224739 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerStarted","Data":"b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.232473 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.233151 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.7331307 +0000 UTC m=+148.606164891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.234805 4593 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m9zzn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]log ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]etcd ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/max-in-flight-filter ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 11:01:22 crc kubenswrapper[4593]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-startinformers ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 
11:01:22 crc kubenswrapper[4593]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 11:01:22 crc kubenswrapper[4593]: livez check failed Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.234894 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" podUID="dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.273146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" event={"ID":"edf60cff-ba6c-450f-bcec-7b14d7513120","Type":"ContainerStarted","Data":"4e90e15f4916d81ad815c84a464d7a3154554b360a2bfa8b0b55d27cfcb3731d"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.304626 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:22 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.304697 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.319939 4593 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g5zq7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.319996 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podUID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.320113 4593 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g5zq7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.320174 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podUID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.328676 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" event={"ID":"28ad6acc-fb5e-4d71-9f36-492c3b1262d2","Type":"ContainerStarted","Data":"165c5378079b51f54c98509e01a52658388b046b5c4394baa703f61a0c8ec9f3"} Jan 29 11:01:22 
crc kubenswrapper[4593]: I0129 11:01:22.338592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.339348 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.839333769 +0000 UTC m=+148.712367970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.378626 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" event={"ID":"bb259eac-6aa7-42d9-883b-2af6b63af4b8","Type":"ContainerStarted","Data":"2776f3c70cbb7ede6321a7c87f7a751134696b83c8b00c05deb9a968a7c91fe7"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.425811 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" event={"ID":"1c91d49f-a382-4279-91c7-a43b3f1e071e","Type":"ContainerStarted","Data":"ac40e4222252a73877076cb3072f20d9c0a99b6b89d8444a35c6b1355a13ded7"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.440227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.441732 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.941709531 +0000 UTC m=+148.814743732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.473227 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"696cf1720196cf57c4da0b337c830ea79045db65f0636c90b3de8b14528e9492"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.479043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" event={"ID":"fae65f9f-a5ea-442a-8c78-aa650d330c4d","Type":"ContainerStarted","Data":"e217b974fa4632683cf1c5b577dcf980fe12d9e389c20e4138bfb225df22cfad"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.479090 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" event={"ID":"fae65f9f-a5ea-442a-8c78-aa650d330c4d","Type":"ContainerStarted","Data":"33e285a80680f60c5fce9274227bd82a78afd2a3d765617f063b4e13a54188f7"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.507584 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerStarted","Data":"0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.508557 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.510561 4593 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ftchp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.510624 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.528047 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" podStartSLOduration=126.528030825 podStartE2EDuration="2m6.528030825s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.485054043 +0000 UTC m=+148.358088234" watchObservedRunningTime="2026-01-29 11:01:22.528030825 +0000 UTC m=+148.401065026" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.545352 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" 
event={"ID":"6910728e-feba-4826-8447-11f4cf860c30","Type":"ContainerStarted","Data":"4450c9f92b23f4d5b82f78ff23480c9752cbd93f501d59b07b7c544108a5c382"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.545425 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.545781 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.045768031 +0000 UTC m=+148.918802222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.545810 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.547140 4593 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g9wvz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.547175 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" podUID="6910728e-feba-4826-8447-11f4cf860c30" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.570394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" event={"ID":"65e50d23-1adc-4462-9424-1d2157c2ff93","Type":"ContainerStarted","Data":"a1bb3a1d7e0f1f5e2ad1f4c3f6120cba74cc973779254be9cf207ce79d3c9f72"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.570430 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" event={"ID":"65e50d23-1adc-4462-9424-1d2157c2ff93","Type":"ContainerStarted","Data":"3dae87d680ec6d5acf3897bbd711f86697fcbfb6473637dee099b73bcd2b56ce"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.598001 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" event={"ID":"58e36a23-974a-4afd-b226-bb194d489cf0","Type":"ContainerStarted","Data":"0e7485990073d9196a875c8aca464ea8d3b4af7bf554594743fc5f93b3663142"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.614785 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" event={"ID":"59084a0c-807b-47c9-b905-6e07817bcb89","Type":"ContainerStarted","Data":"b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.626415 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" podStartSLOduration=126.626396955 podStartE2EDuration="2m6.626396955s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.526681207 +0000 UTC m=+148.399715418" watchObservedRunningTime="2026-01-29 11:01:22.626396955 +0000 UTC m=+148.499431146" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.626805 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podStartSLOduration=126.626796467 podStartE2EDuration="2m6.626796467s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.622503137 +0000 UTC m=+148.495537338" watchObservedRunningTime="2026-01-29 11:01:22.626796467 +0000 UTC m=+148.499830658" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.647613 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.648188 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.148168255 +0000 UTC m=+149.021202456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.648402 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.649193 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.149182403 +0000 UTC m=+149.022216594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.665691 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" event={"ID":"719f2fcb-45e2-4600-82d9-fbf4263201a2","Type":"ContainerStarted","Data":"5cbe6c6ceefbd3454528d1631e16a77426547464c0e0bf6c69c03de9f7884459"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.665753 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" event={"ID":"719f2fcb-45e2-4600-82d9-fbf4263201a2","Type":"ContainerStarted","Data":"c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.707749 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" event={"ID":"c5d626cc-ab7a-408c-9955-c3fc676a799b","Type":"ContainerStarted","Data":"79832379d1bbdea1bf48434932717ca1f0ed0888fea265c0dca3e98ee9699bb2"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.743581 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" podStartSLOduration=126.743559092 podStartE2EDuration="2m6.743559092s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.679511811 +0000 UTC m=+148.552546012" watchObservedRunningTime="2026-01-29 11:01:22.743559092 +0000 UTC m=+148.616593283" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.746567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jnw9r" event={"ID":"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd","Type":"ContainerStarted","Data":"aca853c9026fbd3692d13110a368138ee932936b640d1d1cf17bfe05b9af1428"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.750340 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.750807 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.250788194 +0000 UTC m=+149.123822385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.790403 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" podStartSLOduration=126.790377511 podStartE2EDuration="2m6.790377511s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.746439532 +0000 UTC m=+148.619473733" watchObservedRunningTime="2026-01-29 11:01:22.790377511 +0000 UTC m=+148.663411702" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.817953 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" event={"ID":"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc","Type":"ContainerStarted","Data":"fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.843576 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" event={"ID":"478971f0-c97c-4eb1-86d2-50af06b8aafc","Type":"ContainerStarted","Data":"a2d60d5192d241923530c8bd5ed6cf2e230b686c0266f129683e8144da6ca5c5"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.850319 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" podStartSLOduration=126.850296766 podStartE2EDuration="2m6.850296766s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.791190524 +0000 UTC m=+148.664224725" watchObservedRunningTime="2026-01-29 11:01:22.850296766 +0000 UTC m=+148.723330967" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.850436 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jnw9r" podStartSLOduration=8.85043046 podStartE2EDuration="8.85043046s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.849785432 +0000 UTC m=+148.722819633" watchObservedRunningTime="2026-01-29 11:01:22.85043046 +0000 UTC m=+148.723464661" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.851784 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.852209 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.3521935 +0000 UTC m=+149.225227701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.859141 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" event={"ID":"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50","Type":"ContainerStarted","Data":"e3eb68f3a20819414457d7b687abdfc99613007b340ae9126017cf556fad2b6d"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.878116 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0ee22f5-d5c3-4686-ab5d-53223d05bef6" containerID="3078c01972d813c506b6d8519d9aab9bc964fd78d5df8d30ae175e731ae9564a" exitCode=0 Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.878225 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" event={"ID":"f0ee22f5-d5c3-4686-ab5d-53223d05bef6","Type":"ContainerDied","Data":"3078c01972d813c506b6d8519d9aab9bc964fd78d5df8d30ae175e731ae9564a"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.889166 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" event={"ID":"e7651ef0-a985-4314-a20a-7103624a257a","Type":"ContainerStarted","Data":"6963add583c8c165e41d2d04f97fa22d3b7c12081e62b89283d732699501fa99"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.901907 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" event={"ID":"dc1056e0-74e9-4be8-bcdf-92604e23a2e1","Type":"ContainerStarted","Data":"3cf3a2c7fa5ee0305b02c53e31347ce727cc5996fa01d68d0c1b7a391a402f94"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.906793 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"2f48cbda9004fb1cef5670cd7c470182d9032a02edc19f790e551c7da3e265f7"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.916506 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" event={"ID":"9bce548b-2c64-4ac5-a797-979de4cf7656","Type":"ContainerStarted","Data":"f2946137c5275477291e1d53969eabd7b8bdca8a4c5b713bf1318a819d020561"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.931961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-29j27" event={"ID":"0a7ffb2d-39e9-426f-9364-ebe193a5adc8","Type":"ContainerStarted","Data":"f6685d80a88aeb1befbee546db61858e5d87768098d866a6e53fcd487269da65"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.932007 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-29j27" 
event={"ID":"0a7ffb2d-39e9-426f-9364-ebe193a5adc8","Type":"ContainerStarted","Data":"98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.951515 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" podStartSLOduration=126.951492446 podStartE2EDuration="2m6.951492446s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.8958265 +0000 UTC m=+148.768860701" watchObservedRunningTime="2026-01-29 11:01:22.951492446 +0000 UTC m=+148.824526637" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.952069 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" podStartSLOduration=126.952062712 podStartE2EDuration="2m6.952062712s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.937245147 +0000 UTC m=+148.810279338" watchObservedRunningTime="2026-01-29 11:01:22.952062712 +0000 UTC m=+148.825096903" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.953301 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.954435 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.454419638 +0000 UTC m=+149.327453829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.012032 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerStarted","Data":"a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972"} Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.031161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" event={"ID":"915745e3-1528-4d5f-84a6-001471123924","Type":"ContainerStarted","Data":"419886fe23b41f2860852302590cbbe00c425ce1a54ec11e7d5a3c0cfc693830"} Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.127319 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.132626 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.63261323 +0000 UTC m=+149.505647421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.229212 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.230305 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.730286051 +0000 UTC m=+149.603320242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.330409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.330705 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.830694548 +0000 UTC m=+149.703728739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.382604 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" podStartSLOduration=83.382581359 podStartE2EDuration="1m23.382581359s" podCreationTimestamp="2026-01-29 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:23.379893595 +0000 UTC m=+149.252927806" watchObservedRunningTime="2026-01-29 11:01:23.382581359 +0000 UTC m=+149.255615550" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.383212 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:23 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:23 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:23 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.383464 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.403053 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" event={"ID":"8246045d-6937-4d02-b488-24bcf2eec4bf","Type":"ContainerStarted","Data":"35164e54a60485a7dbe013cf824db9b1209cd122707ac9c6ccc1e471f29e4abb"} Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 
11:01:23.434228 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.435513 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.935492479 +0000 UTC m=+149.808526670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.569879 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.570343 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.0703283 +0000 UTC m=+149.943362491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.689255 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.689624 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.189607495 +0000 UTC m=+150.062641686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874247 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874323 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874353 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874394 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.886969 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.887560 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.387523069 +0000 UTC m=+150.260557270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.918301 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.965534 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.976097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.976329 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.976791 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.476762004 +0000 UTC m=+150.349796195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:23.980610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:23.994354 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" podStartSLOduration=127.994338835 podStartE2EDuration="2m7.994338835s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:23.869616498 +0000 UTC m=+149.742650699" watchObservedRunningTime="2026-01-29 11:01:23.994338835 +0000 UTC m=+149.867373016" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.085442 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.086065 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.58604858 +0000 UTC m=+150.459082771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.088692 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.103900 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.104298 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.146556 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" podStartSLOduration=128.146539041 podStartE2EDuration="2m8.146539041s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.059496027 +0000 UTC m=+149.932530218" watchObservedRunningTime="2026-01-29 11:01:24.146539041 +0000 UTC m=+150.019573232" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.147563 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" podStartSLOduration=128.14755802 podStartE2EDuration="2m8.14755802s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.14614497 +0000 UTC m=+150.019179161" watchObservedRunningTime="2026-01-29 11:01:24.14755802 +0000 UTC m=+150.020592211" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.190054 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.190462 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.690444848 +0000 UTC m=+150.563479039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.208838 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" podStartSLOduration=128.208819593 podStartE2EDuration="2m8.208819593s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.205927842 +0000 UTC m=+150.078962033" watchObservedRunningTime="2026-01-29 11:01:24.208819593 +0000 UTC m=+150.081853784" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.209735 4593 patch_prober.go:28] interesting pod/console-operator-58897d9998-fm7cc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.209792 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podUID="661d5765-a5d7-4cb4-87b9-284f36dc385e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.210452 4593 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g5zq7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.210491 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podUID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.274075 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" podStartSLOduration=128.274061317 podStartE2EDuration="2m8.274061317s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.27274877 +0000 UTC m=+150.145782971" watchObservedRunningTime="2026-01-29 11:01:24.274061317 +0000 UTC m=+150.147095508" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.293415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.293696 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.793684555 +0000 UTC m=+150.666718746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.392277 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" event={"ID":"719f2fcb-45e2-4600-82d9-fbf4263201a2","Type":"ContainerStarted","Data":"7dcea9124afbdb503f66212747ce1aa67316de754f6a9cd930f9fb2d93776a2e"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.392702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.395269 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.395656 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.895642376 +0000 UTC m=+150.768676567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.401953 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:24 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:24 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:24 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.401999 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.492745 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-29j27" event={"ID":"0a7ffb2d-39e9-426f-9364-ebe193a5adc8","Type":"ContainerStarted","Data":"559af5a67e9a8e3d351c94bfa87518901de499103490e8b9d574bd2b89a0accd"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.493880 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-29j27" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.494744 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" podStartSLOduration=128.494720617 podStartE2EDuration="2m8.494720617s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.492479984 +0000 UTC m=+150.365514185" watchObservedRunningTime="2026-01-29 11:01:24.494720617 +0000 UTC m=+150.367754808" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.502003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.502318 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.002303379 +0000 UTC m=+150.875337570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.509563 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" event={"ID":"58e36a23-974a-4afd-b226-bb194d489cf0","Type":"ContainerStarted","Data":"46245db333e8ce753863ff1f2b5f45124a4876dbab3c78b453d6395af231093a"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.519048 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerStarted","Data":"134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.519851 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.521129 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.521165 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.617697 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.618735 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.118715163 +0000 UTC m=+150.991749374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.647977 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" event={"ID":"c5d626cc-ab7a-408c-9955-c3fc676a799b","Type":"ContainerStarted","Data":"b2a61f706f8d76b4219fdd3d32e3038a72a77fd42e0f2de5afca7281ce2981ae"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.706370 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"b2542bf8201794dfa409603a8c0db5fbf7fc73188de204efed4719fcb18d34d5"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.706448 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.716825 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" event={"ID":"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc","Type":"ContainerStarted","Data":"9beb7a130a2815145e1e969bda1d459ac990a7a62677a18a7abc68a72290e404"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.724391 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.724741 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.224728398 +0000 UTC m=+151.097762579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.725358 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" event={"ID":"59084a0c-807b-47c9-b905-6e07817bcb89","Type":"ContainerStarted","Data":"19cbdf2a6be00984d37346a7d481c69738c6ffaad2afee095f61fbfc754a3a9e"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.725971 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.726864 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.726916 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.795697 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jnw9r" event={"ID":"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd","Type":"ContainerStarted","Data":"142bada782bce23bd62180bbfe11e11d2a8c72b3003d42ba1e6e711468c4cfc6"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.798070 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.798467 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.800862 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.800902 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.801775 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" 
event={"ID":"28ad6acc-fb5e-4d71-9f36-492c3b1262d2","Type":"ContainerStarted","Data":"4c14ef3125a849785013eea9aabd2bfaa194654053572172ffc1115dde456e5e"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.803382 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.804648 4593 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vlh9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.804718 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podUID="28ad6acc-fb5e-4d71-9f36-492c3b1262d2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.807225 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-29j27" podStartSLOduration=10.807209234 podStartE2EDuration="10.807209234s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.671994383 +0000 UTC m=+150.545028584" watchObservedRunningTime="2026-01-29 11:01:24.807209234 +0000 UTC m=+150.680243425" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.823980 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" event={"ID":"edf60cff-ba6c-450f-bcec-7b14d7513120","Type":"ContainerStarted","Data":"1df8dddde5fc393630c64585fc5c62998195a9c2d108f207e9e5b63f08bd2f66"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.827670 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.828759 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.328681005 +0000 UTC m=+151.201715236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.845799 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" event={"ID":"dc1056e0-74e9-4be8-bcdf-92604e23a2e1","Type":"ContainerStarted","Data":"0260b49652051fa32135ad0e9703d42815dc7d97c24a46339c85fdc3235e9e35"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.859815 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" event={"ID":"e7651ef0-a985-4314-a20a-7103624a257a","Type":"ContainerStarted","Data":"62886bd06e3a980b0a74bd1e6271c27a56ec4c847728204bde958ae5cc1cb533"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.859853 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" event={"ID":"e7651ef0-a985-4314-a20a-7103624a257a","Type":"ContainerStarted","Data":"d70bdd805acd59fa83447572f6c4d9bb1cec91d0ad6fe98200f1231bba31ec13"} Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.875689 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" podStartSLOduration=128.875673699 podStartE2EDuration="2m8.875673699s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.871861042 +0000 UTC m=+150.744895243" watchObservedRunningTime="2026-01-29 11:01:24.875673699 +0000 UTC m=+150.748707890" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.876065 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podStartSLOduration=128.876060179 podStartE2EDuration="2m8.876060179s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.807895403 +0000 UTC m=+150.680929594" watchObservedRunningTime="2026-01-29 11:01:24.876060179 +0000 UTC m=+150.749094370" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.935654 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.935843 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.936124 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.436109269 +0000 UTC m=+151.309143460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.038104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.040273 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.54025397 +0000 UTC m=+151.413288171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.164435 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.164857 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.664842694 +0000 UTC m=+151.537876885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.271967 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.272274 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.772245148 +0000 UTC m=+151.645279369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.332202 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:25 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:25 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:25 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.332276 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.339351 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.376678 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.377770 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.877755348 +0000 UTC m=+151.750789539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.386578 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" podStartSLOduration=129.386538694 podStartE2EDuration="2m9.386538694s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.906237723 +0000 UTC m=+150.779271914" watchObservedRunningTime="2026-01-29 11:01:25.386538694 +0000 UTC m=+151.259572885" Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.477545 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.477753 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.977726043 +0000 UTC m=+151.850760284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.656374 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.656707 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.156695078 +0000 UTC m=+152.029729259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.760243 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.760603 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.260587623 +0000 UTC m=+152.133621814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.990806 4593 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ftchp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.991130 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.993367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.009278 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.509253165 +0000 UTC m=+152.382287356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.021081 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.021481 4593 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m9zzn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]log ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]etcd ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/max-in-flight-filter ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 11:01:26 crc kubenswrapper[4593]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-startinformers ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 11:01:26 crc kubenswrapper[4593]: livez check failed Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.021535 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" podUID="dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.068215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" event={"ID":"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc","Type":"ContainerStarted","Data":"ba0d8002780561503b14f07f45dc8c892e8c7cb26b80ec3c0f96e63d823f0f56"} Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.071231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"dca828c12b7e5ed017004f46bc1bc2848909e5feb8de8ea119f476e97237367d"} Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.075578 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": 
dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.075621 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.077541 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" event={"ID":"f0ee22f5-d5c3-4686-ab5d-53223d05bef6","Type":"ContainerStarted","Data":"41c7ed3294e3b4ac4e494b9a971b8c7eb3897a70618dbc3befe8c9f77d288938"} Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.078189 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.078245 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.079167 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.079186 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.085269 4593 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vlh9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.085303 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podUID="28ad6acc-fb5e-4d71-9f36-492c3b1262d2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.095324 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.095737 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.595720773 +0000 UTC m=+152.468754964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.206283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.212531 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.7125156 +0000 UTC m=+152.585549791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.315086 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.315427 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.815407396 +0000 UTC m=+152.688441587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.398043 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" podStartSLOduration=130.398022826 podStartE2EDuration="2m10.398022826s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:26.134254391 +0000 UTC m=+152.007288582" watchObservedRunningTime="2026-01-29 11:01:26.398022826 +0000 UTC m=+152.271057017" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.398142 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podStartSLOduration=130.398138399 podStartE2EDuration="2m10.398138399s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:25.38818077 +0000 UTC m=+151.261214961" watchObservedRunningTime="2026-01-29 11:01:26.398138399 +0000 UTC m=+152.271172590" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.531037 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.531467 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.031452707 +0000 UTC m=+152.904486898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.605056 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:26 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:26 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:26 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.605401 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.634535 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.634918 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.1349027 +0000 UTC m=+153.007936891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.647064 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5dc1f_d576_46dd_9de7_2a63c6d4157f.slice/crio-a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5dc1f_d576_46dd_9de7_2a63c6d4157f.slice/crio-conmon-a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.765818 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.766197 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.266183141 +0000 UTC m=+153.139217332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.867501 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.867872 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.367839164 +0000 UTC m=+153.240873355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.885730 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" podStartSLOduration=130.885708472 podStartE2EDuration="2m10.885708472s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:26.76938469 +0000 UTC m=+152.642418891" watchObservedRunningTime="2026-01-29 11:01:26.885708472 +0000 UTC m=+152.758742663" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.886889 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.887593 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.901931 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.924697 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.969727 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.970212 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.470192075 +0000 UTC m=+153.343226266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.064758 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.075295 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.075585 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.075659 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.075770 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.575750397 +0000 UTC m=+153.448784578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.084911 4593 generic.go:334] "Generic (PLEG): container finished" podID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerID="a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972" exitCode=0 Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.086561 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerDied","Data":"a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972"} Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089004 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089061 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089128 4593 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vlh9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089147 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podUID="28ad6acc-fb5e-4d71-9f36-492c3b1262d2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.094016 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podStartSLOduration=131.094000747 podStartE2EDuration="2m11.094000747s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:26.979506735 +0000 UTC m=+152.852540936" watchObservedRunningTime="2026-01-29 11:01:27.094000747 +0000 UTC m=+152.967034938" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.098121 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178023 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:27 
crc kubenswrapper[4593]: I0129 11:01:27.178496 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178538 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178593 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.179362 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.179765 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.679749884 +0000 UTC m=+153.552784075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.191100 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.197804 4593 patch_prober.go:28] interesting pod/console-f9d7485db-8425v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.197879 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8425v" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.244812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.280426 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.282122 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.782106157 +0000 UTC m=+153.655140348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.332023 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.340595 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t7wn4" podStartSLOduration=131.340577031 podStartE2EDuration="2m11.340577031s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:27.154846739 +0000 UTC m=+153.027880930" watchObservedRunningTime="2026-01-29 11:01:27.340577031 +0000 UTC m=+153.213611222" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.359898 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:27 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:27 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:27 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.359940 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.384356 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.384651 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.884623144 +0000 UTC m=+153.757657325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.490519 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.492992 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.992952342 +0000 UTC m=+153.865986543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.516288 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.531352 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.531401 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.532563 4593 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-djdmx container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.532794 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" podUID="f0ee22f5-d5c3-4686-ab5d-53223d05bef6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.548728 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" podStartSLOduration=131.548708622 podStartE2EDuration="2m11.548708622s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:27.449733924 +0000 UTC m=+153.322768135" watchObservedRunningTime="2026-01-29 11:01:27.548708622 +0000 UTC m=+153.421742813" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.592490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.593142 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.093130174 +0000 UTC m=+153.966164355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.695425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.695811 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.195791214 +0000 UTC m=+154.068825405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.911875 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.911931 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.912015 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.912035 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.912407 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:27 
crc kubenswrapper[4593]: E0129 11:01:27.912763 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.412742241 +0000 UTC m=+154.285776532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.964587 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976455 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976509 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976595 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976615 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994053 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994115 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994442 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994490 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.164586 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.164754 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.664729357 +0000 UTC m=+154.537763548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.164908 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.165282 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.665266931 +0000 UTC m=+154.538301122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.181151 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"ad3f849f3006828d0a15e797bdea7fed3078f0652a5bc01a59b83a6d0ee24a6d"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.188038 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d1c6dffceda9bbdd2912bf97b95c997f77c990bbd0911e7d7180592727745739"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.188106 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ba040d1f8f92a8dc180bbd9b343662b333e547072f578c81646bb33e7c310983"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.188331 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.192677 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b3c84013b146db0c242e89fe2706b26110f225b6ef2d4f806c94e09a8861298e"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.375343 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.376017 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.875956843 +0000 UTC m=+154.748991034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.443136 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:28 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:28 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:28 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.443571 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.476736 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.483170 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.98315215 +0000 UTC m=+154.856186341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.492031 4593 patch_prober.go:28] interesting pod/console-operator-58897d9998-fm7cc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.492121 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podUID="661d5765-a5d7-4cb4-87b9-284f36dc385e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.615601 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.615850 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.11582081 +0000 UTC m=+154.988855001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.616146 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.616415 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.116402367 +0000 UTC m=+154.989436558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.682574 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" podStartSLOduration=132.682554776 podStartE2EDuration="2m12.682554776s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:28.67268256 +0000 UTC m=+154.545716761" watchObservedRunningTime="2026-01-29 11:01:28.682554776 +0000 UTC m=+154.555588967" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.811958 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.812308 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.312291953 +0000 UTC m=+155.185326144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.917248 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.917594 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.417582478 +0000 UTC m=+155.290616669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.039645 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.040015 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.539999311 +0000 UTC m=+155.413033502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.141025 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.141375 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.641359174 +0000 UTC m=+155.514393365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.241796 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.242298 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.742281557 +0000 UTC m=+155.615315748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.268179 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"b4c4530ccf25a0bf81f49c7a364bffb6ef5c4571a43866b2820656d70677c2ae"} Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.270303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"518852f3ae67c727bf2c9699bb1ebbd7a5343979c6a650ae839125f6f5a77375"} Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.320551 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:29 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:29 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:29 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.320661 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.379218 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: 
\"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.379695 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.879676079 +0000 UTC m=+155.752710270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.488414 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.490039 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.990002813 +0000 UTC m=+155.863037004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.590381 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.590946 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.090933026 +0000 UTC m=+155.963967217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.698068 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.698487 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.198471283 +0000 UTC m=+156.071505464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.856251 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.856930 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.356914834 +0000 UTC m=+156.229949025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.063373 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.063693 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.563669334 +0000 UTC m=+156.436703525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.256410 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.256715 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.756703982 +0000 UTC m=+156.629738173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.315628 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:30 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:30 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:30 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.315986 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.337175 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469368 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469808 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.469818 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.969782519 +0000 UTC m=+156.842816710 (durationBeforeRetry 500ms). 
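The router startup-probe failures interleaved above all follow the aggregated healthz format: each sub-check is reported with a [+] or [-] marker, reasons are withheld by default, and any failing sub-check turns the whole endpoint into an HTTP 500, which is what the kubelet prober then records as "HTTP probe failed with statuscode: 500". Below is a minimal, hypothetical Go sketch of a handler that produces output in that shape; the check names (backend-http, has-synced, process-running) are taken from the log, but the handler itself is illustrative and is not the router's actual code.

```go
// Illustrative sketch only: an aggregated healthz endpoint in the style of the
// router startup-probe output above. Not the router's implementation.
package main

import (
	"fmt"
	"net/http"
)

// check is one named sub-check; ok reports whether it currently passes.
type check struct {
	name string
	ok   func() bool
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				// Aggregated healthz output typically withholds the reason by default.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			// The kubelet prober sees this as "statuscode: 500".
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{name: "backend-http", ok: func() bool { return false }}, // still failing, as in the log
		{name: "has-synced", ok: func() bool { return false }},
		{name: "process-running", ok: func() bool { return true }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	http.ListenAndServe(":8080", nil)
}
```

Once the backend-http and has-synced sub-checks start passing, the same endpoint returns 200 and the startup probe stops failing.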
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469888 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.470081 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.470169 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume" (OuterVolumeSpecName: "config-volume") pod "eef5dc1f-d576-46dd-9de7-2a63c6d4157f" (UID: "eef5dc1f-d576-46dd-9de7-2a63c6d4157f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.470489 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.970473619 +0000 UTC m=+156.843507820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.484043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"005a7ceadef3d52b7889d079a191cf32cd310968eb816c46c1e7caa730904d30"} Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.484095 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9637a3e2b6d22746d4b44f195443a4359ebe4cf5b08dd5c909a9789fef96f476"} Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.493509 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.493667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerDied","Data":"6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5"} Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.493696 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.531681 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eef5dc1f-d576-46dd-9de7-2a63c6d4157f" (UID: "eef5dc1f-d576-46dd-9de7-2a63c6d4157f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.532157 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg" (OuterVolumeSpecName: "kube-api-access-95lmg") pod "eef5dc1f-d576-46dd-9de7-2a63c6d4157f" (UID: "eef5dc1f-d576-46dd-9de7-2a63c6d4157f"). InnerVolumeSpecName "kube-api-access-95lmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573213 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573494 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573514 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573529 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.573630 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.073614942 +0000 UTC m=+156.946649133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.733167 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.734340 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.234329196 +0000 UTC m=+157.107363387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.836276 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.836606 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.336587746 +0000 UTC m=+157.209621937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.946355 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.946702 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.446689165 +0000 UTC m=+157.319723356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.992038 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.999267 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.046056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.046960 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.047128 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.547106943 +0000 UTC m=+157.420141134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.047487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.047784 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.547776711 +0000 UTC m=+157.420810902 (durationBeforeRetry 500ms). 
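The repeated "Operation for {volumeName:...} failed. No retries permitted until ... (durationBeforeRetry 500ms)" lines show the kubelet's volume manager refusing to start a new mount or unmount attempt for the same volume until a short backoff window has elapsed; each failure re-arms the window and the reconciler retries on a later pass. The sketch below reproduces that gating behaviour in miniature under those assumptions: it is a hypothetical illustration keyed on a volume name with a fixed 500ms window (matching what this log shows), not the kubelet's actual nestedpendingoperations code.

```go
// Illustrative sketch only: a per-key retry gate in the spirit of the
// "No retries permitted until ..." messages above. Not kubelet code.
package main

import (
	"fmt"
	"time"
)

// retryGate tracks the earliest time each keyed operation may run again.
type retryGate struct {
	notBefore map[string]time.Time
}

func newRetryGate() *retryGate {
	return &retryGate{notBefore: map[string]time.Time{}}
}

// try runs op unless the key is still inside its backoff window; a failure
// re-arms the window, a success clears it.
func (g *retryGate) try(key string, backoff time.Duration, op func() error) error {
	if until, ok := g.notBefore[key]; ok && time.Now().Before(until) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			until.Format(time.RFC3339Nano), backoff)
	}
	if err := op(); err != nil {
		g.notBefore[key] = time.Now().Add(backoff)
		return err
	}
	delete(g.notBefore, key)
	return nil
}

func main() {
	g := newRetryGate()
	mount := func() error { return fmt.Errorf("driver not registered yet") }

	// First attempt fails and arms a 500ms backoff, as in the log.
	fmt.Println(g.try("pvc-657094db", 500*time.Millisecond, mount))
	// An immediate retry is rejected without calling the operation again.
	fmt.Println(g.try("pvc-657094db", 500*time.Millisecond, mount))
}
```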
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.226857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.227667 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.727651351 +0000 UTC m=+157.600685532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.324544 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:31 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:31 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:31 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.324667 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.330246 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.330665 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.830653621 +0000 UTC m=+157.703687812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.462990 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.463256 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.963240258 +0000 UTC m=+157.836274449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.466861 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.467046 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerName="collect-profiles" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.467062 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerName="collect-profiles" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.467169 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerName="collect-profiles" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.467868 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.508393 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerStarted","Data":"412596aea7ed79508efc55009f025bcad32104c84207c15c5c4be80493ef4961"} Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.511085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"9e13240e319463e4bf3d8598ae9956ab8cee414615315afe26dd555048869166"} Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.511254 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.525039 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.563880 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.564492 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.064473029 +0000 UTC m=+157.937507240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.640096 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.640686 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.642666 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" podStartSLOduration=17.642628405 podStartE2EDuration="17.642628405s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:31.635906456 +0000 UTC m=+157.508940647" watchObservedRunningTime="2026-01-29 11:01:31.642628405 +0000 UTC m=+157.515662596" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.648733 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.648835 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665285 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665562 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665603 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665660 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.666337 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.166313757 +0000 UTC m=+158.039347948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.767327 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.767919 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.767468 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768051 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768094 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768170 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768191 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.768722 4593 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.26871146 +0000 UTC m=+158.141745651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.769042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.772881 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.807943 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.816552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.851540 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.852742 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.868978 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869264 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.869304 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.369278352 +0000 UTC m=+158.242312573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869372 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869384 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869498 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869582 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869658 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869684 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869731 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869783 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.870316 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.37030636 +0000 UTC m=+158.243340571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.931755 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.953145 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.964012 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970487 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.970690 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.470671387 +0000 UTC m=+158.343705578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970727 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970752 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970798 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970835 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970865 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970890 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970929 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.971086 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 11:01:32.471075548 +0000 UTC m=+158.344109739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971258 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971391 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971513 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.075042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.083311 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.084642 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.584598323 +0000 UTC m=+158.457632514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.085083 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.085510 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.585493898 +0000 UTC m=+158.458528089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.118948 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.136868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.162555 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.165565 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.173347 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.176619 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187223 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187361 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187386 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.187580 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.687564182 +0000 UTC m=+158.560598373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.211322 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.334911 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335777 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335867 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335903 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.336347 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.336398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.336730 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.836714283 +0000 UTC m=+158.709748534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.341899 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:32 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:32 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:32 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.341964 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.424947 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.436577 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.936562305 +0000 UTC m=+158.809596496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.436500 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.436905 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.437159 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.93713601 +0000 UTC m=+158.810170201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.490821 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.571384 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.572161 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.572462 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.072448144 +0000 UTC m=+158.945482335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.581990 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.633691 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.765870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerStarted","Data":"15afaa0d2878e6c1cc1e59308afdc3dd8e09e8f7b2a5941c77353c3358c20af0"} Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.767613 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-29j27" Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.769924 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.772108 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.271849629 +0000 UTC m=+159.144883820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.932745 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.933712 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.433695905 +0000 UTC m=+159.306730096 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.065182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.065473 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.565461599 +0000 UTC m=+159.438495790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.177818 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.179197 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.679181588 +0000 UTC m=+159.552215779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.301392 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.301732 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.801720585 +0000 UTC m=+159.674754776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.318858 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:33 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:33 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:33 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.318908 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.423298 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=7.423278474 podStartE2EDuration="7.423278474s" podCreationTimestamp="2026-01-29 11:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:33.010167633 +0000 UTC m=+158.883201824" watchObservedRunningTime="2026-01-29 11:01:33.423278474 +0000 UTC m=+159.296312665" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.425539 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.426798 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.427693 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.428222 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.928204321 +0000 UTC m=+159.801238512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.448969 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.493982 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528780 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528829 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528862 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528898 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.529441 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.029427462 +0000 UTC m=+159.902461653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.664745 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.665098 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.665153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.665210 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.666045 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.16597417 +0000 UTC m=+160.039008361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.669324 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.670448 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.726962 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.825439 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.826100 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.326079757 +0000 UTC m=+160.199113948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.837962 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.900178 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.901152 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.914103 4593 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928414 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928495 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928604 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.929454 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.429435627 +0000 UTC m=+160.302469818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.028443 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.028521 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029325 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029379 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029476 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.029758 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.529746912 +0000 UTC m=+160.402781103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.030394 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.030774 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.036390 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.130931 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.131582 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.631560029 +0000 UTC m=+160.504594220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.133288 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.240988 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.285050 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.78502992 +0000 UTC m=+160.658064111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.347323 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:34 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:34 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:34 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.347373 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.347839 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.372220 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.372686 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.872664881 +0000 UTC m=+160.745699072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.474819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.475111 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.975099025 +0000 UTC m=+160.848133216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.577792 4593 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T11:01:33.914131679Z","Handler":null,"Name":""} Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.587102 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.587460 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:35.087443016 +0000 UTC m=+160.960477207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.741185 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.741510 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:35.241499993 +0000 UTC m=+161.114534184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: W0129 11:01:34.775671 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d47516f_05e5_4f96_bf5a_c4251af51b6b.slice/crio-96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585 WatchSource:0}: Error finding container 96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585: Status 404 returned error can't find the container with id 96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585 Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.814836 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.820879 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.822268 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.829924 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.838734 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842114 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842523 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842550 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842612 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: 
E0129 11:01:34.917760 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:35.417733031 +0000 UTC m=+161.290767222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.917925 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.970023 4593 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.970049 4593 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.036967 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.037153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.037186 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.037312 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.046029 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.046340 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.178990 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.236746 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgg5s" event={"ID":"695d677a-4519-4ff0-9c6a-cbc902b00ee5","Type":"ContainerStarted","Data":"73c935e8b979b7dc8ab160b89b0aa92943613ba07d23ca3617474e48390b50f1"} Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.237087 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdz2v" event={"ID":"3d47516f-05e5-4f96-bf5a-c4251af51b6b","Type":"ContainerStarted","Data":"96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585"} Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.239831 4593 generic.go:334] "Generic (PLEG): container finished" podID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerID="15afaa0d2878e6c1cc1e59308afdc3dd8e09e8f7b2a5941c77353c3358c20af0" exitCode=0 Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.239867 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerDied","Data":"15afaa0d2878e6c1cc1e59308afdc3dd8e09e8f7b2a5941c77353c3358c20af0"} Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.265668 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.265756 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.267093 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.382725 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.382768 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.382811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.388061 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:35 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:35 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:35 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.388120 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.394470 4593 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.394518 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.474061 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.491024 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.491082 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.491148 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.492597 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.493511 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.563528 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.636210 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.767986 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.773576 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.946996 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.994429 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.145762 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:36 crc kubenswrapper[4593]: W0129 11:01:36.220182 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfb142e67_1809_4b4f_91d6_1c745a85cb13.slice/crio-d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c WatchSource:0}: Error finding container d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c: Status 404 returned error can't find the container with id d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.259321 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.337685 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:36 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:36 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:36 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.337740 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.387661 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lf9gr" event={"ID":"9c000e16-ab7a-4247-99da-74ea62d94b89","Type":"ContainerStarted","Data":"e852468ceed93d241feec7b7965eaf616d41cdfd72c07bd89b3ac0aca81937b9"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.397387 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.429306 4593 generic.go:334] "Generic (PLEG): container finished" podID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerID="b9d5c7d4701eae15759c1c9b230bf47aaf13c122f4acea86bd71b0030082917d" exitCode=0 Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.429880 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgg5s" event={"ID":"695d677a-4519-4ff0-9c6a-cbc902b00ee5","Type":"ContainerDied","Data":"b9d5c7d4701eae15759c1c9b230bf47aaf13c122f4acea86bd71b0030082917d"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.436356 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.473833 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerStarted","Data":"d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.495382 4593 generic.go:334] "Generic (PLEG): container finished" podID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerID="45fd11091e4829626417cd96b671777720a463c182e9d6f349c55edbbe7126c6" exitCode=0 Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.496651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdz2v" event={"ID":"3d47516f-05e5-4f96-bf5a-c4251af51b6b","Type":"ContainerDied","Data":"45fd11091e4829626417cd96b671777720a463c182e9d6f349c55edbbe7126c6"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.544428 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.163490 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.164541 4593 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.203064 4593 patch_prober.go:28] interesting pod/console-f9d7485db-8425v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.203120 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8425v" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.355809 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:37 crc kubenswrapper[4593]: [+]has-synced ok Jan 29 11:01:37 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:37 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.356145 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.457938 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.547464 4593 generic.go:334] "Generic (PLEG): container finished" podID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerID="1bf75ace58181af9f0cccb28ad84d5dd8c16c8b69d21079288e4029c1048cd89" exitCode=0 Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.547515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7gmb" event={"ID":"da7a9394-5c19-4a9e-9c6d-652b3ce08477","Type":"ContainerDied","Data":"1bf75ace58181af9f0cccb28ad84d5dd8c16c8b69d21079288e4029c1048cd89"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.547539 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7gmb" event={"ID":"da7a9394-5c19-4a9e-9c6d-652b3ce08477","Type":"ContainerStarted","Data":"72aa027856b0ef03a57066a814eb40eddf13ecfd2d1c62024902a4d79111cf83"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.573961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69z82" event={"ID":"e424f176-80e8-4029-a500-097e1d9e5b1e","Type":"ContainerStarted","Data":"eef621985e16727acc46b16908219680b25248fd848eacdfa61bcd853a7c18ac"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.583590 4593 generic.go:334] "Generic (PLEG): container finished" podID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerID="8e093f0363d31a3b87d3f9991c3433e34b34cbb53e07ea1c58a964d993b8be1a" exitCode=0 Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.583687 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lf9gr" 
event={"ID":"9c000e16-ab7a-4247-99da-74ea62d94b89","Type":"ContainerDied","Data":"8e093f0363d31a3b87d3f9991c3433e34b34cbb53e07ea1c58a964d993b8be1a"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.764915 4593 generic.go:334] "Generic (PLEG): container finished" podID="6ce733ca-85e0-43f9-a444-9703d600da63" containerID="ee4825fff37e0ca04b8b8e3c87e01fed5f500f91478778493b455fcf75dfd5d6" exitCode=0 Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.764959 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvwft" event={"ID":"6ce733ca-85e0-43f9-a444-9703d600da63","Type":"ContainerDied","Data":"ee4825fff37e0ca04b8b8e3c87e01fed5f500f91478778493b455fcf75dfd5d6"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.764984 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvwft" event={"ID":"6ce733ca-85e0-43f9-a444-9703d600da63","Type":"ContainerStarted","Data":"5a2bdd7e5cb75db5cc0318b63cd7ca3e8135afeaf117d553a67933c149ec867e"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778696 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778712 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778742 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778763 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.009482 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.015213 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.037968 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.090753 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.096685 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.105398 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.178359 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.199987 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"54e3f9bd-cf5f-4361-81b2-78571380f93f\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.200044 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"54e3f9bd-cf5f-4361-81b2-78571380f93f\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.201379 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54e3f9bd-cf5f-4361-81b2-78571380f93f" (UID: "54e3f9bd-cf5f-4361-81b2-78571380f93f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.203683 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54e3f9bd-cf5f-4361-81b2-78571380f93f" (UID: "54e3f9bd-cf5f-4361-81b2-78571380f93f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.299901 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.331363 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.331400 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.335655 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.349793 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.388452 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:38 crc kubenswrapper[4593]: W0129 11:01:38.416805 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3be8312_dfdd_4359_b8c8_d9b8158fdab4.slice/crio-e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4 WatchSource:0}: Error finding container e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4: Status 404 returned error can't find the container with id e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4 Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.888334 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerStarted","Data":"fb99d447e5189720ac881b538d20b70d4e3aef55d12b3a424d01a9dc39152640"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.903881 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-6dlwj_5d8acfc6-0334-4294-8dd6-c3091ebb69d3/cluster-samples-operator/0.log" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.903934 4593 generic.go:334] "Generic (PLEG): container finished" podID="5d8acfc6-0334-4294-8dd6-c3091ebb69d3" containerID="bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384" exitCode=2 Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.904003 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerDied","Data":"bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.904559 4593 scope.go:117] "RemoveContainer" containerID="bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.922243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7d7" 
event={"ID":"7ba9e41c-b01a-4d45-9272-24aca728f7bc","Type":"ContainerStarted","Data":"f8947bf8603825421d7767efdebe3e5aa280154ddb0198dabfc109bfedbfab57"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.945842 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerStarted","Data":"ecf17e2b2f3453ee3e9aff90a681babab3e1dd6bb035e067992d73d5ba5adc5d"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.987412 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.990888 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerDied","Data":"412596aea7ed79508efc55009f025bcad32104c84207c15c5c4be80493ef4961"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.990944 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412596aea7ed79508efc55009f025bcad32104c84207c15c5c4be80493ef4961" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.995418 4593 generic.go:334] "Generic (PLEG): container finished" podID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerID="daec26b82fedd17793042a2543f04b2bffe9792c65bc9d01520e1daaec56238e" exitCode=0 Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.995494 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69z82" event={"ID":"e424f176-80e8-4029-a500-097e1d9e5b1e","Type":"ContainerDied","Data":"daec26b82fedd17793042a2543f04b2bffe9792c65bc9d01520e1daaec56238e"} Jan 29 11:01:39 crc kubenswrapper[4593]: I0129 11:01:38.999726 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqhd7" event={"ID":"d3be8312-dfdd-4359-b8c8-d9b8158fdab4","Type":"ContainerStarted","Data":"e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4"} Jan 29 11:01:39 crc kubenswrapper[4593]: I0129 11:01:39.195809 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=8.195787281 podStartE2EDuration="8.195787281s" podCreationTimestamp="2026-01-29 11:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:38.99332901 +0000 UTC m=+164.866363201" watchObservedRunningTime="2026-01-29 11:01:39.195787281 +0000 UTC m=+165.068821482" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.039989 4593 generic.go:334] "Generic (PLEG): container finished" podID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerID="6a9a45884a6f1cc5b501c7194e0aa2ef03b9fa8ba41ecbcea41cfa16d1d8fa17" exitCode=0 Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.040973 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqhd7" event={"ID":"d3be8312-dfdd-4359-b8c8-d9b8158fdab4","Type":"ContainerDied","Data":"6a9a45884a6f1cc5b501c7194e0aa2ef03b9fa8ba41ecbcea41cfa16d1d8fa17"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.070620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" 
event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerStarted","Data":"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.071257 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.101060 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-6dlwj_5d8acfc6-0334-4294-8dd6-c3091ebb69d3/cluster-samples-operator/0.log" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.101155 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"77e1d9df33f67ff19f8f03931cc533ad69f68170903f08b1a53a441097e413ab"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.119365 4593 generic.go:334] "Generic (PLEG): container finished" podID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerID="3d931ac31836dde066a45b4cd0a61a0a245f5279e75d2cf3230380f6b7a7f2dc" exitCode=0 Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.119452 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7d7" event={"ID":"7ba9e41c-b01a-4d45-9272-24aca728f7bc","Type":"ContainerDied","Data":"3d931ac31836dde066a45b4cd0a61a0a245f5279e75d2cf3230380f6b7a7f2dc"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.149610 4593 generic.go:334] "Generic (PLEG): container finished" podID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerID="ecf17e2b2f3453ee3e9aff90a681babab3e1dd6bb035e067992d73d5ba5adc5d" exitCode=0 Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.149851 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerDied","Data":"ecf17e2b2f3453ee3e9aff90a681babab3e1dd6bb035e067992d73d5ba5adc5d"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.150096 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" podStartSLOduration=144.150074975 podStartE2EDuration="2m24.150074975s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:40.146104594 +0000 UTC m=+166.019138785" watchObservedRunningTime="2026-01-29 11:01:40.150074975 +0000 UTC m=+166.023109186" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.354423 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7jm9m"] Jan 29 11:01:40 crc kubenswrapper[4593]: W0129 11:01:40.642183 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d229804_724c_4e21_89ac_e3369b615389.slice/crio-04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944 WatchSource:0}: Error finding container 04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944: Status 404 returned error can't find the container with id 04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944 Jan 29 11:01:41 crc kubenswrapper[4593]: I0129 11:01:41.174088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-7jm9m" event={"ID":"7d229804-724c-4e21-89ac-e3369b615389","Type":"ContainerStarted","Data":"04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944"} Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.349930 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.397242 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"fb142e67-1809-4b4f-91d6-1c745a85cb13\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.397551 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"fb142e67-1809-4b4f-91d6-1c745a85cb13\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.397804 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb142e67-1809-4b4f-91d6-1c745a85cb13" (UID: "fb142e67-1809-4b4f-91d6-1c745a85cb13"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.420805 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb142e67-1809-4b4f-91d6-1c745a85cb13" (UID: "fb142e67-1809-4b4f-91d6-1c745a85cb13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.512611 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.512669 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.806494 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" event={"ID":"7d229804-724c-4e21-89ac-e3369b615389","Type":"ContainerStarted","Data":"ec8e97d41005702c44c8ae632aed99d0a195511509305f7d4be2f5e066d8e1d4"} Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.808225 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerDied","Data":"d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c"} Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.808280 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.808547 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:45 crc kubenswrapper[4593]: I0129 11:01:45.278736 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7jm9m" podStartSLOduration=149.278701048 podStartE2EDuration="2m29.278701048s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:45.274312846 +0000 UTC m=+171.147347037" watchObservedRunningTime="2026-01-29 11:01:45.278701048 +0000 UTC m=+171.151735249" Jan 29 11:01:46 crc kubenswrapper[4593]: I0129 11:01:46.351859 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" event={"ID":"7d229804-724c-4e21-89ac-e3369b615389","Type":"ContainerStarted","Data":"1cd590312d706f079cabb1272de333cf8d0b3327dd3dd6d04fccf2db0a4c47d9"} Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.217913 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.223448 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.936800 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.937213 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.936820 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.942844 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.942883 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.943482 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017"} pod="openshift-console/downloads-7954f5f757-t7wn4" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.943552 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-t7wn4" 
podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" containerID="cri-o://bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017" gracePeriod=2 Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.947607 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.947660 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:48 crc kubenswrapper[4593]: I0129 11:01:48.483253 4593 generic.go:334] "Generic (PLEG): container finished" podID="fa5b3597-636e-4cf0-ad99-755378e23867" containerID="bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017" exitCode=0 Jan 29 11:01:48 crc kubenswrapper[4593]: I0129 11:01:48.483305 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerDied","Data":"bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017"} Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.853497 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63"} Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.856351 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.856721 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.856770 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:52 crc kubenswrapper[4593]: I0129 11:01:52.140304 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:52 crc kubenswrapper[4593]: I0129 11:01:52.140604 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:53 crc kubenswrapper[4593]: I0129 11:01:53.168035 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:53 crc kubenswrapper[4593]: I0129 11:01:53.168136 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:56 crc kubenswrapper[4593]: I0129 11:01:56.178090 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768413 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768475 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768952 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768971 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.989145 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.505452 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.505956 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" containerID="cri-o://9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d" gracePeriod=30 Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.617609 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.617858 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" containerID="cri-o://acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943" gracePeriod=30 Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 
11:02:00.782584 4593 generic.go:334] "Generic (PLEG): container finished" podID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerID="acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943" exitCode=0 Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.782720 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerDied","Data":"acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943"} Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.805835 4593 generic.go:334] "Generic (PLEG): container finished" podID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerID="9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d" exitCode=0 Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.805876 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerDied","Data":"9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d"} Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.875876 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196261 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:01 crc kubenswrapper[4593]: E0129 11:02:01.196530 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196545 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: E0129 11:02:01.196566 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196574 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" Jan 29 11:02:01 crc kubenswrapper[4593]: E0129 11:02:01.196591 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196598 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196742 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196758 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196772 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.197153 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 
11:02:01.197250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.223945 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251032 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251140 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251233 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.252786 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.253408 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca" (OuterVolumeSpecName: "client-ca") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.254267 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config" (OuterVolumeSpecName: "config") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.309161 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx" (OuterVolumeSpecName: "kube-api-access-m95zx") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "kube-api-access-m95zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352555 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352708 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352998 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.353520 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.353580 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.353620 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: 
\"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354191 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354396 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354410 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354421 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354436 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.355429 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config" (OuterVolumeSpecName: "config") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.388089 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.388417 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca" (OuterVolumeSpecName: "client-ca") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.396003 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6" (OuterVolumeSpecName: "kube-api-access-q2fn6") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "kube-api-access-q2fn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.396541 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477071 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477211 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477273 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477329 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477343 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477355 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477366 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477377 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.478610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.479941 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.481505 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.491866 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.520093 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.571682 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.872197 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.872846 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerDied","Data":"9eed55ee0a88f35fc2bf20b9123f7aae8a2cd1091b8b30b1223e2725c98e46d9"} Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.872943 4593 scope.go:117] "RemoveContainer" containerID="acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.879564 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerDied","Data":"334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d"} Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.879695 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.921414 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.927604 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.935077 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.939209 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:02:02 crc kubenswrapper[4593]: I0129 11:02:02.152595 4593 scope.go:117] "RemoveContainer" containerID="9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d" Jan 29 11:02:02 crc kubenswrapper[4593]: I0129 11:02:02.487784 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:02 crc kubenswrapper[4593]: I0129 11:02:02.968778 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerStarted","Data":"21f6b5d0c55de6d3ac91b432cc366d4adadbf13bd4e64cace71084fab1fad375"} Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.101349 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" path="/var/lib/kubelet/pods/76a22425-a78d-4304-b158-f577c6ef4c4f/volumes" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.102253 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" path="/var/lib/kubelet/pods/a62104dd-d659-409a-b8f5-85aaf2856a14/volumes" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.359804 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:03 crc kubenswrapper[4593]: E0129 11:02:03.360079 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" 
containerName="route-controller-manager" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.360104 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.360233 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.360726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363433 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363611 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363836 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363957 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.364683 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.367958 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.381898 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.462661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.462825 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.463008 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.463088 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.564499 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.570038 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.578478 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.578520 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.579464 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.578346 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.604819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.605044 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") 
pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.700065 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:03.998683 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:04.003808 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:04.281469 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:04.297083 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerStarted","Data":"98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c"} Jan 29 11:02:05 crc kubenswrapper[4593]: I0129 11:02:05.639615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:05 crc kubenswrapper[4593]: I0129 11:02:05.928872 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:05 crc kubenswrapper[4593]: I0129 11:02:05.984888 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podStartSLOduration=6.984871912 podStartE2EDuration="6.984871912s" podCreationTimestamp="2026-01-29 11:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:02:05.926901831 +0000 UTC m=+191.799936022" watchObservedRunningTime="2026-01-29 11:02:05.984871912 +0000 UTC m=+191.857906093" Jan 29 11:02:06 crc kubenswrapper[4593]: I0129 11:02:06.988943 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.802865 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.803081 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 
10.217.0.9:8080: connect: connection refused" Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.803611 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.803647 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.545210 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.546534 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.549997 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.550181 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.557926 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.568815 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.569083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.678046 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.678160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.678748 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.715755 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.875487 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.889759 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.890353 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.890405 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.889800 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.890774 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891089 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63"} pod="openshift-console/downloads-7954f5f757-t7wn4" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891122 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" containerID="cri-o://80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63" gracePeriod=2 Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891146 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891202 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:18 crc kubenswrapper[4593]: I0129 11:02:18.664926 4593 generic.go:334] "Generic (PLEG): container finished" podID="fa5b3597-636e-4cf0-ad99-755378e23867" containerID="80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63" exitCode=0 Jan 29 11:02:18 crc kubenswrapper[4593]: I0129 11:02:18.664996 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerDied","Data":"80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63"} Jan 29 11:02:18 crc kubenswrapper[4593]: I0129 11:02:18.665294 4593 scope.go:117] "RemoveContainer" containerID="bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.357492 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.358101 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" containerID="cri-o://98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c" gracePeriod=30 Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.369742 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.540920 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.541589 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.553579 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.713249 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.713349 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.713399 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.725251 4593 generic.go:334] "Generic (PLEG): container finished" podID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerID="98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c" exitCode=0 Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.725287 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerDied","Data":"98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c"} Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815001 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.814925 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815523 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815692 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815877 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.851030 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.925027 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:21 crc kubenswrapper[4593]: I0129 11:02:21.573755 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 29 11:02:21 crc kubenswrapper[4593]: I0129 11:02:21.573827 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 29 11:02:27 crc kubenswrapper[4593]: I0129 11:02:27.799773 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:27 crc kubenswrapper[4593]: I0129 11:02:27.800403 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:31 crc kubenswrapper[4593]: I0129 11:02:31.572684 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 29 11:02:31 crc kubenswrapper[4593]: I0129 11:02:31.572992 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 29 11:02:31 crc kubenswrapper[4593]: W0129 11:02:31.732951 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4378129_7124_43d0_a1a0_4085d0213d85.slice/crio-4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502 WatchSource:0}: Error finding container 4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502: Status 404 returned error can't find the container with id 
4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502 Jan 29 11:02:32 crc kubenswrapper[4593]: I0129 11:02:32.049562 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerStarted","Data":"4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502"} Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.946539 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.946928 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.946973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.947620 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.947680 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a" gracePeriod=600 Jan 29 11:02:35 crc kubenswrapper[4593]: I0129 11:02:35.210530 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a" exitCode=0 Jan 29 11:02:35 crc kubenswrapper[4593]: I0129 11:02:35.210584 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a"} Jan 29 11:02:37 crc kubenswrapper[4593]: I0129 11:02:37.768514 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:37 crc kubenswrapper[4593]: I0129 11:02:37.768900 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.183707 4593 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jntfl"] Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.185163 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.201465 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jntfl"] Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.238586 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-tls\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.238878 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-trusted-ca\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc17831-117a-497d-bc13-b48ed5d95c90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2j4\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-kube-api-access-zv2j4\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239217 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239295 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc17831-117a-497d-bc13-b48ed5d95c90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239365 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-bound-sa-token\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 
29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239453 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-certificates\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.276395 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340252 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-bound-sa-token\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340328 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-certificates\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340356 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-tls\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340383 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-trusted-ca\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340443 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc17831-117a-497d-bc13-b48ed5d95c90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340471 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2j4\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-kube-api-access-zv2j4\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340527 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/0fc17831-117a-497d-bc13-b48ed5d95c90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.341308 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc17831-117a-497d-bc13-b48ed5d95c90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.343148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-certificates\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.366445 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-trusted-ca\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.367076 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-tls\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.368959 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-bound-sa-token\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.369484 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc17831-117a-497d-bc13-b48ed5d95c90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.398250 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2j4\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-kube-api-access-zv2j4\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.506729 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.573312 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.573394 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:02:44 crc kubenswrapper[4593]: E0129 11:02:44.864212 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 11:02:44 crc kubenswrapper[4593]: E0129 11:02:44.865021 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7wxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fgg5s_openshift-marketplace(695d677a-4519-4ff0-9c6a-cbc902b00ee5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:44 crc kubenswrapper[4593]: E0129 11:02:44.866411 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fgg5s" 
podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" Jan 29 11:02:47 crc kubenswrapper[4593]: I0129 11:02:47.768042 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:47 crc kubenswrapper[4593]: I0129 11:02:47.768317 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:52 crc kubenswrapper[4593]: I0129 11:02:52.174350 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:02:52 crc kubenswrapper[4593]: I0129 11:02:52.573287 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:02:52 crc kubenswrapper[4593]: I0129 11:02:52.573620 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.286897 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fgg5s" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.371731 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.371898 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7j57m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qdz2v_openshift-marketplace(3d47516f-05e5-4f96-bf5a-c4251af51b6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.373044 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qdz2v" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.403334 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.403519 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-879j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tm7d7_openshift-marketplace(7ba9e41c-b01a-4d45-9272-24aca728f7bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.404691 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tm7d7" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.402928 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.410382 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.419125 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.430115 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.441425 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.441683 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" containerID="cri-o://134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868" gracePeriod=30 Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.447592 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.458555 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.469138 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s2rlp"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.470808 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.489152 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tm7d7" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.489356 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qdz2v" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.493664 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.509381 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.510706 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s2rlp"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.549564 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.549671 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt492\" (UniqueName: \"kubernetes.io/projected/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-kube-api-access-lt492\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.549722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.553288 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.553436 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-69z82_openshift-marketplace(e424f176-80e8-4029-a500-097e1d9e5b1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.556489 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-69z82" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.650899 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.650992 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt492\" (UniqueName: \"kubernetes.io/projected/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-kube-api-access-lt492\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.651039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 
11:02:56.652685 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.661675 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.673411 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt492\" (UniqueName: \"kubernetes.io/projected/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-kube-api-access-lt492\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.760737 4593 generic.go:334] "Generic (PLEG): container finished" podID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerID="134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868" exitCode=0 Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.760990 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerDied","Data":"134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868"} Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.796070 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.769303 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.769350 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.991625 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.991699 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.922777 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.927126 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.931108 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.995599 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.017872 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118101 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"e424f176-80e8-4029-a500-097e1d9e5b1e\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118193 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"e424f176-80e8-4029-a500-097e1d9e5b1e\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118251 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118279 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118300 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118326 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118386 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118436 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod 
\"3d47516f-05e5-4f96-bf5a-c4251af51b6b\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118488 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118533 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"e424f176-80e8-4029-a500-097e1d9e5b1e\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118563 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.119097 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ba9e41c-b01a-4d45-9272-24aca728f7bc" (UID: "7ba9e41c-b01a-4d45-9272-24aca728f7bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.119457 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e424f176-80e8-4029-a500-097e1d9e5b1e" (UID: "e424f176-80e8-4029-a500-097e1d9e5b1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.120341 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities" (OuterVolumeSpecName: "utilities") pod "e424f176-80e8-4029-a500-097e1d9e5b1e" (UID: "e424f176-80e8-4029-a500-097e1d9e5b1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.121088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities" (OuterVolumeSpecName: "utilities") pod "7ba9e41c-b01a-4d45-9272-24aca728f7bc" (UID: "7ba9e41c-b01a-4d45-9272-24aca728f7bc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.124619 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.125272 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.125910 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities" (OuterVolumeSpecName: "utilities") pod "3d47516f-05e5-4f96-bf5a-c4251af51b6b" (UID: "3d47516f-05e5-4f96-bf5a-c4251af51b6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.131385 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config" (OuterVolumeSpecName: "config") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.132674 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m" (OuterVolumeSpecName: "kube-api-access-7j57m") pod "3d47516f-05e5-4f96-bf5a-c4251af51b6b" (UID: "3d47516f-05e5-4f96-bf5a-c4251af51b6b"). InnerVolumeSpecName "kube-api-access-7j57m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.133094 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d47516f-05e5-4f96-bf5a-c4251af51b6b" (UID: "3d47516f-05e5-4f96-bf5a-c4251af51b6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.134564 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.134791 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l" (OuterVolumeSpecName: "kube-api-access-cc45l") pod "e424f176-80e8-4029-a500-097e1d9e5b1e" (UID: "e424f176-80e8-4029-a500-097e1d9e5b1e"). InnerVolumeSpecName "kube-api-access-cc45l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.134878 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2" (OuterVolumeSpecName: "kube-api-access-879j2") pod "7ba9e41c-b01a-4d45-9272-24aca728f7bc" (UID: "7ba9e41c-b01a-4d45-9272-24aca728f7bc"). InnerVolumeSpecName "kube-api-access-879j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.138964 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g" (OuterVolumeSpecName: "kube-api-access-8cb9g") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "kube-api-access-8cb9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.233567 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.233643 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.233726 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234151 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234168 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234181 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234193 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234207 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234220 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-879j2\" (UniqueName: 
\"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234231 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234245 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.236528 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "695d677a-4519-4ff0-9c6a-cbc902b00ee5" (UID: "695d677a-4519-4ff0-9c6a-cbc902b00ee5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.237178 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities" (OuterVolumeSpecName: "utilities") pod "695d677a-4519-4ff0-9c6a-cbc902b00ee5" (UID: "695d677a-4519-4ff0-9c6a-cbc902b00ee5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234257 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249735 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249749 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249766 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249962 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249976 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.250058 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk" (OuterVolumeSpecName: "kube-api-access-t7wxk") pod "695d677a-4519-4ff0-9c6a-cbc902b00ee5" (UID: "695d677a-4519-4ff0-9c6a-cbc902b00ee5"). InnerVolumeSpecName "kube-api-access-t7wxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.350799 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.350831 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.350843 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.604677 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.614607 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1e47dc9d_9af5_4d14_b8f3_f227d93c792d.slice/crio-811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee WatchSource:0}: Error finding container 811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee: Status 404 returned error can't find the container with id 811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.616426 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.629027 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.654162 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.654215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.654250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.657244 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0aa74baf-fde3-4dad-aef0-7b8b1ae90098" (UID: "0aa74baf-fde3-4dad-aef0-7b8b1ae90098"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.659851 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0aa74baf-fde3-4dad-aef0-7b8b1ae90098" (UID: "0aa74baf-fde3-4dad-aef0-7b8b1ae90098"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.667835 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6" (OuterVolumeSpecName: "kube-api-access-srcl6") pod "0aa74baf-fde3-4dad-aef0-7b8b1ae90098" (UID: "0aa74baf-fde3-4dad-aef0-7b8b1ae90098"). InnerVolumeSpecName "kube-api-access-srcl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.671859 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s2rlp"] Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.674797 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc78186dc_c8e4_4018_8e50_f7fc0e719890.slice/crio-3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b WatchSource:0}: Error finding container 3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b: Status 404 returned error can't find the container with id 3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.677765 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a59fe58_c900_46ea_8ff2_8a7f49210dc3.slice/crio-55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe WatchSource:0}: Error finding container 55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe: Status 404 returned error can't find the container with id 55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.748568 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749026 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749043 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749052 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749057 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749069 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749077 4593 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749093 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749099 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749134 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749141 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749150 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749155 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749320 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749331 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749339 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749350 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749359 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749366 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749791 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.755431 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.755499 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.755516 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.760619 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.801330 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdz2v" event={"ID":"3d47516f-05e5-4f96-bf5a-c4251af51b6b","Type":"ContainerDied","Data":"96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.801678 4593 scope.go:117] "RemoveContainer" containerID="45fd11091e4829626417cd96b671777720a463c182e9d6f349c55edbbe7126c6" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.801794 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.820791 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerStarted","Data":"3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.823192 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jntfl"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.859214 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerDied","Data":"21f6b5d0c55de6d3ac91b432cc366d4adadbf13bd4e64cace71084fab1fad375"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.888918 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.902347 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerStarted","Data":"56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.904746 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerStarted","Data":"811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee"} Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.906188 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc17831_117a_497d_bc13_b48ed5d95c90.slice/crio-9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5 WatchSource:0}: Error finding container 9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5: Status 404 returned error can't find the container with id 9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5 Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.906779 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.907534 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgg5s" event={"ID":"695d677a-4519-4ff0-9c6a-cbc902b00ee5","Type":"ContainerDied","Data":"73c935e8b979b7dc8ab160b89b0aa92943613ba07d23ca3617474e48390b50f1"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.922100 4593 scope.go:117] "RemoveContainer" containerID="98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.924896 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7d7" event={"ID":"7ba9e41c-b01a-4d45-9272-24aca728f7bc","Type":"ContainerDied","Data":"f8947bf8603825421d7767efdebe3e5aa280154ddb0198dabfc109bfedbfab57"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.924985 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.934444 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" event={"ID":"7a59fe58-c900-46ea-8ff2-8a7f49210dc3","Type":"ContainerStarted","Data":"55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.937384 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69z82" event={"ID":"e424f176-80e8-4029-a500-097e1d9e5b1e","Type":"ContainerDied","Data":"eef621985e16727acc46b16908219680b25248fd848eacdfa61bcd853a7c18ac"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.937497 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.940211 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerDied","Data":"b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.940297 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961102 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961181 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961204 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961228 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.985070 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.992806 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.057275 4593 scope.go:117] "RemoveContainer" containerID="b9d5c7d4701eae15759c1c9b230bf47aaf13c122f4acea86bd71b0030082917d" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.057261 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.057424 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwmr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w7gmb_openshift-marketplace(da7a9394-5c19-4a9e-9c6d-652b3ce08477): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.058531 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w7gmb" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062897 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062941 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod 
\"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062989 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.063021 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.065576 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.067243 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.067488 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.068374 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.087260 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" path="/var/lib/kubelet/pods/a4f38956-d909-4b11-8617-fd9fdcc92e10/volumes" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.093379 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.102517 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.108019 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.114381 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.114944 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.150589 4593 scope.go:117] "RemoveContainer" containerID="3d931ac31836dde066a45b4cd0a61a0a245f5279e75d2cf3230380f6b7a7f2dc" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.150977 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.158587 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.181164 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.193262 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.201466 4593 scope.go:117] "RemoveContainer" containerID="daec26b82fedd17793042a2543f04b2bffe9792c65bc9d01520e1daaec56238e" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.227988 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.232518 4593 scope.go:117] "RemoveContainer" containerID="134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.239579 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.386970 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.660965 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.697873 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.701740 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwkcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cqhd7_openshift-marketplace(d3be8312-dfdd-4359-b8c8-d9b8158fdab4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.702897 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cqhd7" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.823100 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kt56h"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.824612 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.826681 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.840312 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kt56h"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.950689 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"da6c305fc9b4c36ff1aec13c8062f2c0c0d8fc4e42de88cb5476d8e17fdd0fdc"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.951211 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.951284 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.951323 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.954951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" event={"ID":"7a59fe58-c900-46ea-8ff2-8a7f49210dc3","Type":"ContainerStarted","Data":"1322e0b9140cfd25133d356253fbbffb5b8abfcdf97b1fb98dc5f672c80a5589"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.955174 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.959558 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" event={"ID":"0fc17831-117a-497d-bc13-b48ed5d95c90","Type":"ContainerStarted","Data":"ef53e07a0641f4e11c6001a1d0f9039045d18f8efa57411c7acf284a77d10665"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.959599 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" event={"ID":"0fc17831-117a-497d-bc13-b48ed5d95c90","Type":"ContainerStarted","Data":"9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.960428 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.964673 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.966406 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" 
event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.972299 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerStarted","Data":"1944570fd0d711d5a3ddcb6c09ae1efbc4f659af6ced43239c4b6ab7e0c86a58"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.973214 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerStarted","Data":"a0d208891d18d712bd489561852a82f696e7d25c808617b7fe312d4e3430e177"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.984265 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-utilities\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.984313 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-catalog-content\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.984344 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjtg\" (UniqueName: \"kubernetes.io/projected/f0d1455d-ba27-48f0-be57-3d8e91a63853-kube-api-access-qjjtg\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.999606 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerStarted","Data":"698371c58f150386702001acf70ee1dd100d06b388a9c7e51ab1417419f484f6"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.999716 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" containerID="cri-o://56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51" gracePeriod=30 Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.000467 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.014112 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.086758 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-utilities\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") 
" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.086818 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-catalog-content\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.086865 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjtg\" (UniqueName: \"kubernetes.io/projected/f0d1455d-ba27-48f0-be57-3d8e91a63853-kube-api-access-qjjtg\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.089383 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-utilities\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.089988 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-catalog-content\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.109689 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=48.109669326 podStartE2EDuration="48.109669326s" podCreationTimestamp="2026-01-29 11:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.108751631 +0000 UTC m=+247.981785822" watchObservedRunningTime="2026-01-29 11:03:02.109669326 +0000 UTC m=+247.982703517" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.128570 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjtg\" (UniqueName: \"kubernetes.io/projected/f0d1455d-ba27-48f0-be57-3d8e91a63853-kube-api-access-qjjtg\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.156866 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=43.156843485 podStartE2EDuration="43.156843485s" podCreationTimestamp="2026-01-29 11:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.149705036 +0000 UTC m=+248.022739247" watchObservedRunningTime="2026-01-29 11:03:02.156843485 +0000 UTC m=+248.029877676" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.167938 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.255833 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" podStartSLOduration=6.255817413 podStartE2EDuration="6.255817413s" podCreationTimestamp="2026-01-29 11:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.24536747 +0000 UTC m=+248.118401661" watchObservedRunningTime="2026-01-29 11:03:02.255817413 +0000 UTC m=+248.128851594" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.257749 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" podStartSLOduration=63.257743826 podStartE2EDuration="1m3.257743826s" podCreationTimestamp="2026-01-29 11:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.21352334 +0000 UTC m=+248.086557531" watchObservedRunningTime="2026-01-29 11:03:02.257743826 +0000 UTC m=+248.130778017" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.287360 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" podStartSLOduration=20.287343254 podStartE2EDuration="20.287343254s" podCreationTimestamp="2026-01-29 11:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.285163933 +0000 UTC m=+248.158198134" watchObservedRunningTime="2026-01-29 11:03:02.287343254 +0000 UTC m=+248.160377445" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.311069 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.311205 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spqr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lf9gr_openshift-marketplace(9c000e16-ab7a-4247-99da-74ea62d94b89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.314779 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lf9gr" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.637582 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.667949 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kt56h"] Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.696498 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.701671 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.701756 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.701796 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.703085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da7a9394-5c19-4a9e-9c6d-652b3ce08477" (UID: "da7a9394-5c19-4a9e-9c6d-652b3ce08477"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.704841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities" (OuterVolumeSpecName: "utilities") pod "da7a9394-5c19-4a9e-9c6d-652b3ce08477" (UID: "da7a9394-5c19-4a9e-9c6d-652b3ce08477"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.714867 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4" (OuterVolumeSpecName: "kube-api-access-mwmr4") pod "da7a9394-5c19-4a9e-9c6d-652b3ce08477" (UID: "da7a9394-5c19-4a9e-9c6d-652b3ce08477"). InnerVolumeSpecName "kube-api-access-mwmr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.776989 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.777269 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5bhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tvwft_openshift-marketplace(6ce733ca-85e0-43f9-a444-9703d600da63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.778563 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tvwft" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.802664 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803830 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwkcz\" 
(UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.804487 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.804614 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.804711 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803074 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3be8312-dfdd-4359-b8c8-d9b8158fdab4" (UID: "d3be8312-dfdd-4359-b8c8-d9b8158fdab4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities" (OuterVolumeSpecName: "utilities") pod "d3be8312-dfdd-4359-b8c8-d9b8158fdab4" (UID: "d3be8312-dfdd-4359-b8c8-d9b8158fdab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.809172 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz" (OuterVolumeSpecName: "kube-api-access-vwkcz") pod "d3be8312-dfdd-4359-b8c8-d9b8158fdab4" (UID: "d3be8312-dfdd-4359-b8c8-d9b8158fdab4"). InnerVolumeSpecName "kube-api-access-vwkcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.906582 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.907006 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.907022 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005493 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4378129-7124-43d0-a1a0-4085d0213d85" containerID="56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51" exitCode=0 Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005576 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerDied","Data":"56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005607 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerDied","Data":"4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005619 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.006857 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqhd7" event={"ID":"d3be8312-dfdd-4359-b8c8-d9b8158fdab4","Type":"ContainerDied","Data":"e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.006882 4593 scope.go:117] "RemoveContainer" containerID="6a9a45884a6f1cc5b501c7194e0aa2ef03b9fa8ba41ecbcea41cfa16d1d8fa17" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.007007 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.015369 4593 generic.go:334] "Generic (PLEG): container finished" podID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerID="698371c58f150386702001acf70ee1dd100d06b388a9c7e51ab1417419f484f6" exitCode=0 Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.015430 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerDied","Data":"698371c58f150386702001acf70ee1dd100d06b388a9c7e51ab1417419f484f6"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.020876 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7gmb" event={"ID":"da7a9394-5c19-4a9e-9c6d-652b3ce08477","Type":"ContainerDied","Data":"72aa027856b0ef03a57066a814eb40eddf13ecfd2d1c62024902a4d79111cf83"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.020973 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.036785 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerStarted","Data":"efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.037706 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038408 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0d1455d-ba27-48f0-be57-3d8e91a63853" containerID="da9803603a32c2b1706f9f56f2f7fd646c19157b252303218bfff0d2077cf305" exitCode=0 Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038613 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerDied","Data":"da9803603a32c2b1706f9f56f2f7fd646c19157b252303218bfff0d2077cf305"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerStarted","Data":"c0f3efbce7e67af8cb25c4825c2bac1610293b1ae77dcc4e6435612734c04f47"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.045337 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.045373 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 
11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.045662 4593 scope.go:117] "RemoveContainer" containerID="1bf75ace58181af9f0cccb28ad84d5dd8c16c8b69d21079288e4029c1048cd89" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.051115 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.094150 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" path="/var/lib/kubelet/pods/0aa74baf-fde3-4dad-aef0-7b8b1ae90098/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.094754 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" path="/var/lib/kubelet/pods/3d47516f-05e5-4f96-bf5a-c4251af51b6b/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.095301 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" path="/var/lib/kubelet/pods/695d677a-4519-4ff0-9c6a-cbc902b00ee5/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.097326 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" path="/var/lib/kubelet/pods/7ba9e41c-b01a-4d45-9272-24aca728f7bc/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.097836 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" path="/var/lib/kubelet/pods/e424f176-80e8-4029-a500-097e1d9e5b1e/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.113876 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.113990 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.114035 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.114164 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.114208 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.116217 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.117202 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config" (OuterVolumeSpecName: "config") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.125418 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj" (OuterVolumeSpecName: "kube-api-access-rwlfj") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). InnerVolumeSpecName "kube-api-access-rwlfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.125744 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.131354 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.158048 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" podStartSLOduration=4.15802795 podStartE2EDuration="4.15802795s" podCreationTimestamp="2026-01-29 11:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:03.146085836 +0000 UTC m=+249.019120047" watchObservedRunningTime="2026-01-29 11:03:03.15802795 +0000 UTC m=+249.031062141" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.217927 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.217972 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.218006 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.218015 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.220282 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.232844 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.336914 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.396261 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420121 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"9c000e16-ab7a-4247-99da-74ea62d94b89\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420263 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"6ce733ca-85e0-43f9-a444-9703d600da63\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420291 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"9c000e16-ab7a-4247-99da-74ea62d94b89\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420308 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"6ce733ca-85e0-43f9-a444-9703d600da63\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420333 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"9c000e16-ab7a-4247-99da-74ea62d94b89\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420351 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"6ce733ca-85e0-43f9-a444-9703d600da63\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.421367 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities" (OuterVolumeSpecName: "utilities") pod "6ce733ca-85e0-43f9-a444-9703d600da63" (UID: "6ce733ca-85e0-43f9-a444-9703d600da63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.421442 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c000e16-ab7a-4247-99da-74ea62d94b89" (UID: "9c000e16-ab7a-4247-99da-74ea62d94b89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.422045 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ce733ca-85e0-43f9-a444-9703d600da63" (UID: "6ce733ca-85e0-43f9-a444-9703d600da63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.422567 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities" (OuterVolumeSpecName: "utilities") pod "9c000e16-ab7a-4247-99da-74ea62d94b89" (UID: "9c000e16-ab7a-4247-99da-74ea62d94b89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.425811 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2" (OuterVolumeSpecName: "kube-api-access-spqr2") pod "9c000e16-ab7a-4247-99da-74ea62d94b89" (UID: "9c000e16-ab7a-4247-99da-74ea62d94b89"). InnerVolumeSpecName "kube-api-access-spqr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.425942 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb" (OuterVolumeSpecName: "kube-api-access-p5bhb") pod "6ce733ca-85e0-43f9-a444-9703d600da63" (UID: "6ce733ca-85e0-43f9-a444-9703d600da63"). InnerVolumeSpecName "kube-api-access-p5bhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522050 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522082 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522092 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522104 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522112 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522120 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617443 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vbjtl"] Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617675 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617687 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617696 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617702 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617714 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617720 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617728 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617748 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617755 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617840 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617849 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617861 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617867 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617877 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.618564 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.625725 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.640120 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vbjtl"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.725250 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-utilities\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.725330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lqx\" (UniqueName: \"kubernetes.io/projected/954251cb-5bea-456e-8d36-27eda2fe92d6-kube-api-access-z9lqx\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.725381 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-catalog-content\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.833146 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lqx\" (UniqueName: \"kubernetes.io/projected/954251cb-5bea-456e-8d36-27eda2fe92d6-kube-api-access-z9lqx\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc 
kubenswrapper[4593]: I0129 11:03:03.833211 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-catalog-content\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.833242 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-utilities\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.833620 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-utilities\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.834117 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-catalog-content\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.853969 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lqx\" (UniqueName: \"kubernetes.io/projected/954251cb-5bea-456e-8d36-27eda2fe92d6-kube-api-access-z9lqx\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.933194 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.051719 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lf9gr" event={"ID":"9c000e16-ab7a-4247-99da-74ea62d94b89","Type":"ContainerDied","Data":"e852468ceed93d241feec7b7965eaf616d41cdfd72c07bd89b3ac0aca81937b9"} Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.052042 4593 scope.go:117] "RemoveContainer" containerID="8e093f0363d31a3b87d3f9991c3433e34b34cbb53e07ea1c58a964d993b8be1a" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.052150 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.063398 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.063438 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvwft" event={"ID":"6ce733ca-85e0-43f9-a444-9703d600da63","Type":"ContainerDied","Data":"5a2bdd7e5cb75db5cc0318b63cd7ca3e8135afeaf117d553a67933c149ec867e"} Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.067052 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.071331 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.071378 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.119706 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.128913 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.140353 4593 scope.go:117] "RemoveContainer" containerID="ee4825fff37e0ca04b8b8e3c87e01fed5f500f91478778493b455fcf75dfd5d6" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.145014 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.148211 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.193501 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.197917 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.371335 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:03:04 crc kubenswrapper[4593]: W0129 11:03:04.389599 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954251cb_5bea_456e_8d36_27eda2fe92d6.slice/crio-0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a WatchSource:0}: Error finding container 0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a: Status 404 returned error can't find the container with id 0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.392957 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vbjtl"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.561760 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.562066 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.562512 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1e47dc9d-9af5-4d14-b8f3-f227d93c792d" (UID: "1e47dc9d-9af5-4d14-b8f3-f227d93c792d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.570931 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1e47dc9d-9af5-4d14-b8f3-f227d93c792d" (UID: "1e47dc9d-9af5-4d14-b8f3-f227d93c792d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.663971 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.664891 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.074069 4593 generic.go:334] "Generic (PLEG): container finished" podID="954251cb-5bea-456e-8d36-27eda2fe92d6" containerID="dc67b1b441df9db7285d242722d5600d9639c1caa2a14882031e742233b35a0f" exitCode=0 Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.085443 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.086550 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" path="/var/lib/kubelet/pods/6ce733ca-85e0-43f9-a444-9703d600da63/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.089004 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" path="/var/lib/kubelet/pods/9c000e16-ab7a-4247-99da-74ea62d94b89/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.089831 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" path="/var/lib/kubelet/pods/d3be8312-dfdd-4359-b8c8-d9b8158fdab4/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.090559 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" path="/var/lib/kubelet/pods/da7a9394-5c19-4a9e-9c6d-652b3ce08477/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.092319 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" path="/var/lib/kubelet/pods/f4378129-7124-43d0-a1a0-4085d0213d85/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093270 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerDied","Data":"dc67b1b441df9db7285d242722d5600d9639c1caa2a14882031e742233b35a0f"} Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093417 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerStarted","Data":"0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a"} Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerDied","Data":"811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee"} Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093625 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.220528 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-57v5l"] Jan 29 11:03:06 crc kubenswrapper[4593]: E0129 11:03:06.221016 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerName="pruner" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.221027 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerName="pruner" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.221123 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerName="pruner" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.221818 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.224886 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.240260 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-57v5l"] Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.286870 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-catalog-content\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.286948 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-utilities\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.286977 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whh4p\" (UniqueName: \"kubernetes.io/projected/3ae70d27-10ec-4015-851d-d84aaf99d782-kube-api-access-whh4p\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.387838 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-catalog-content\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.387924 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-utilities\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.387951 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whh4p\" (UniqueName: \"kubernetes.io/projected/3ae70d27-10ec-4015-851d-d84aaf99d782-kube-api-access-whh4p\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.388339 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-catalog-content\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.388611 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-utilities\") pod \"community-operators-57v5l\" (UID: 
\"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.411109 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whh4p\" (UniqueName: \"kubernetes.io/projected/3ae70d27-10ec-4015-851d-d84aaf99d782-kube-api-access-whh4p\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.534993 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.787595 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.788706 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.791326 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.791644 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.791709 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.796187 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.799986 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.803768 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.804066 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953315 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953713 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953764 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953785 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.054930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.055000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.055053 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.055078 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.056193 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.056586 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.062905 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod 
\"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.073306 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.097031 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0d1455d-ba27-48f0-be57-3d8e91a63853" containerID="90a3c8fe6e3b3c67889ebc6d5bc0e4f5101fb783bf937cb0cff6d2c277cde15e" exitCode=0 Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.097085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerDied","Data":"90a3c8fe6e3b3c67889ebc6d5bc0e4f5101fb783bf937cb0cff6d2c277cde15e"} Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.113133 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-57v5l"] Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.118989 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: W0129 11:03:07.122423 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ae70d27_10ec_4015_851d_d84aaf99d782.slice/crio-debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805 WatchSource:0}: Error finding container debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805: Status 404 returned error can't find the container with id debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805 Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.546484 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.767979 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.768290 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.768053 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.768338 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" 
podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.103945 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerStarted","Data":"bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.104000 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerStarted","Data":"21ade5a578e280b9b59a20196ece09521420534fe714ba11867382d7f37334ad"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.105193 4593 generic.go:334] "Generic (PLEG): container finished" podID="3ae70d27-10ec-4015-851d-d84aaf99d782" containerID="a4d7fe7f20fdaffdd69fd8fa9fd3f50b3a1065337b6fe8179e47e8a996045175" exitCode=0 Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.105226 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerDied","Data":"a4d7fe7f20fdaffdd69fd8fa9fd3f50b3a1065337b6fe8179e47e8a996045175"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.105246 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerStarted","Data":"debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.621414 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2f96"] Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.622511 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.625775 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.676044 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2f96"] Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.790029 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-catalog-content\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.790166 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8gv\" (UniqueName: \"kubernetes.io/projected/69a313ce-b443-4080-9eea-bde0c61dc59d-kube-api-access-bs8gv\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.790193 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-utilities\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.891813 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8gv\" (UniqueName: \"kubernetes.io/projected/69a313ce-b443-4080-9eea-bde0c61dc59d-kube-api-access-bs8gv\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.891862 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-utilities\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.891893 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-catalog-content\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.892369 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-catalog-content\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.892705 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-utilities\") pod \"redhat-marketplace-v2f96\" (UID: 
\"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.911939 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8gv\" (UniqueName: \"kubernetes.io/projected/69a313ce-b443-4080-9eea-bde0c61dc59d-kube-api-access-bs8gv\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.937777 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.125878 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.145702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.208577 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" podStartSLOduration=10.208551221 podStartE2EDuration="10.208551221s" podCreationTimestamp="2026-01-29 11:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:09.204545909 +0000 UTC m=+255.077580100" watchObservedRunningTime="2026-01-29 11:03:09.208551221 +0000 UTC m=+255.081585412" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.563214 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2f96"] Jan 29 11:03:12 crc kubenswrapper[4593]: W0129 11:03:12.044060 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a313ce_b443_4080_9eea_bde0c61dc59d.slice/crio-a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2 WatchSource:0}: Error finding container a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2: Status 404 returned error can't find the container with id a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2 Jan 29 11:03:12 crc kubenswrapper[4593]: I0129 11:03:12.221823 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2f96" event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerStarted","Data":"a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.251044 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerStarted","Data":"b01cf87c464002d003adad1df6433bb907f431ed214d1bcde8a84c6da9246667"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.253179 4593 generic.go:334] "Generic (PLEG): container finished" podID="69a313ce-b443-4080-9eea-bde0c61dc59d" containerID="fed25ad9139b9cfcd6fb12417440a8ebfc2bb9d954511884a4747cc4e7b08432" exitCode=0 Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.253246 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2f96" 
event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerDied","Data":"fed25ad9139b9cfcd6fb12417440a8ebfc2bb9d954511884a4747cc4e7b08432"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.256620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerStarted","Data":"0c86ba93f1ff030bcfb900d11758b1232ffa6e02adae8fe5018449d1c26ee3a9"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.259778 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" containerID="cri-o://0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c" gracePeriod=15 Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.271703 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerStarted","Data":"d3ae6c551b97e3c2a1aa5587184f94da8da17ffe874a2ca331b108bdd06a45e0"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.348395 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kt56h" podStartSLOduration=3.41084184 podStartE2EDuration="16.348376242s" podCreationTimestamp="2026-01-29 11:03:01 +0000 UTC" firstStartedPulling="2026-01-29 11:03:03.052985232 +0000 UTC m=+248.926019423" lastFinishedPulling="2026-01-29 11:03:15.990519594 +0000 UTC m=+261.863553825" observedRunningTime="2026-01-29 11:03:17.345888104 +0000 UTC m=+263.218922305" watchObservedRunningTime="2026-01-29 11:03:17.348376242 +0000 UTC m=+263.221410433" Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.413284 4593 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ftchp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.413332 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767467 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767520 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767813 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" 
start-of-body= Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767948 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:18 crc kubenswrapper[4593]: I0129 11:03:18.279193 4593 generic.go:334] "Generic (PLEG): container finished" podID="3ae70d27-10ec-4015-851d-d84aaf99d782" containerID="b01cf87c464002d003adad1df6433bb907f431ed214d1bcde8a84c6da9246667" exitCode=0 Jan 29 11:03:18 crc kubenswrapper[4593]: I0129 11:03:18.280476 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerDied","Data":"b01cf87c464002d003adad1df6433bb907f431ed214d1bcde8a84c6da9246667"} Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.262181 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.262383 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" containerID="cri-o://efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23" gracePeriod=30 Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.310751 4593 generic.go:334] "Generic (PLEG): container finished" podID="954251cb-5bea-456e-8d36-27eda2fe92d6" containerID="0c86ba93f1ff030bcfb900d11758b1232ffa6e02adae8fe5018449d1c26ee3a9" exitCode=0 Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.311523 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerDied","Data":"0c86ba93f1ff030bcfb900d11758b1232ffa6e02adae8fe5018449d1c26ee3a9"} Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.316746 4593 generic.go:334] "Generic (PLEG): container finished" podID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerID="0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c" exitCode=0 Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.316870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerDied","Data":"0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c"} Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.373778 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.373971 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" containerID="cri-o://bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb" gracePeriod=30 Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.241759 4593 util.go:48] "No ready sandbox for pod can be found. 
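[Editor's note] The "Generic (PLEG): container finished" lines followed by "SyncLoop (PLEG): event for pod … ContainerDied" come from the pod lifecycle event generator, which periodically relists containers from the runtime and turns state changes into events for the sync loop. A toy stdlib-only version of that diff; the container IDs and states are illustrative, not the kubelet's real data structures:

```go
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	ContainerID string
	Type        string // "ContainerStarted" or "ContainerDied"
}

// relistDiff compares the previous and current container states and emits
// lifecycle events, roughly what PLEG does on each relist.
func relistDiff(prev, curr map[string]state) []event {
	var events []event
	for id, s := range curr {
		old, seen := prev[id]
		switch {
		case !seen && s == running:
			events = append(events, event{id, "ContainerStarted"})
		case seen && old == running && s == exited:
			events = append(events, event{id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	prev := map[string]state{"0951708a49a1": running}
	curr := map[string]state{"0951708a49a1": exited, "bc65351199a7": running}
	for _, e := range relistDiff(prev, curr) {
		fmt.Printf("Generic (PLEG): %s containerID=%q\n", e.Type, e.ContainerID)
	}
}
```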
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.271852 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"] Jan 29 11:03:20 crc kubenswrapper[4593]: E0129 11:03:20.272094 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.272108 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.272225 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.272666 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.293472 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"] Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.336669 4593 generic.go:334] "Generic (PLEG): container finished" podID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerID="bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb" exitCode=0 Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.336734 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerDied","Data":"bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb"} Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.340248 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.340242 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerDied","Data":"0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64"} Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.340439 4593 scope.go:117] "RemoveContainer" containerID="0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.342191 4593 generic.go:334] "Generic (PLEG): container finished" podID="1b7bc172-8368-4c52-a739-34655c0e9686" containerID="efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23" exitCode=0 Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.342230 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerDied","Data":"efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23"} Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429572 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429646 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429687 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429723 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429786 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 
11:03:20.429800 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429813 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429867 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429904 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429951 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430017 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430042 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430081 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430102 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430259 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430301 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430323 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-login\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430346 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-session\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430376 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-service-ca\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430420 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-router-certs\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430448 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-error\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430481 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxws\" (UniqueName: \"kubernetes.io/projected/7fa6519b-42fa-4af8-a739-e77110dff723-kube-api-access-wzxws\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " 
pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430505 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430535 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6519b-42fa-4af8-a739-e77110dff723-audit-dir\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430556 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-audit-policies\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430573 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430608 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430709 4593 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.431201 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.431614 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.432072 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.434469 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.435002 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.435625 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.435946 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.436400 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.436745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.437069 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.437841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.439620 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.441235 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj" (OuterVolumeSpecName: "kube-api-access-q92mj") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "kube-api-access-q92mj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.532113 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.532628 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.532961 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-login\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.533804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-session\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534133 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-service-ca\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534044 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534431 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-router-certs\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534862 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-error\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " 
pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.535002 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-service-ca\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.535328 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxws\" (UniqueName: \"kubernetes.io/projected/7fa6519b-42fa-4af8-a739-e77110dff723-kube-api-access-wzxws\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.535851 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.536189 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.536693 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6519b-42fa-4af8-a739-e77110dff723-audit-dir\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537056 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-audit-policies\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537360 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537596 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 
11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.538997 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545314 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545349 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545364 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545382 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545396 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545410 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545427 4593 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545443 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545457 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545470 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545486 4593 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545500 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545513 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545136 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-login\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.538078 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-session\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.536799 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6519b-42fa-4af8-a739-e77110dff723-audit-dir\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.538398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-audit-policies\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.539042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-error\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537718 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-router-certs\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.540115 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.541127 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.542997 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.544173 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.561271 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxws\" (UniqueName: \"kubernetes.io/projected/7fa6519b-42fa-4af8-a739-e77110dff723-kube-api-access-wzxws\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.585695 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.681994 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.687067 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.994618 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"] Jan 29 11:03:20 crc kubenswrapper[4593]: W0129 11:03:20.996279 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fa6519b_42fa_4af8_a739_e77110dff723.slice/crio-6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c WatchSource:0}: Error finding container 6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c: Status 404 returned error can't find the container with id 6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.051940 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.086403 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" path="/var/lib/kubelet/pods/e544204e-7186-4a22-a6bf-79a5101af4b6/volumes" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.152189 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153244 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config" (OuterVolumeSpecName: "config") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153328 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153867 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153912 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.154132 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.154493 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca" (OuterVolumeSpecName: "client-ca") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.159106 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.159181 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg" (OuterVolumeSpecName: "kube-api-access-gfsgg") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "kube-api-access-gfsgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.255593 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.255654 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.255673 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.350612 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" event={"ID":"7fa6519b-42fa-4af8-a739-e77110dff723","Type":"ContainerStarted","Data":"6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c"} Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.351950 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerDied","Data":"21ade5a578e280b9b59a20196ece09521420534fe714ba11867382d7f37334ad"} Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.351980 4593 scope.go:117] "RemoveContainer" containerID="bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.352077 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.381236 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.384385 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.492081 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.558975 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559056 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559159 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559211 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559238 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.560004 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.560086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config" (OuterVolumeSpecName: "config") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.560134 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca" (OuterVolumeSpecName: "client-ca") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.575924 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.575992 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp" (OuterVolumeSpecName: "kube-api-access-wmmdp") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "kube-api-access-wmmdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660879 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660934 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660946 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660958 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660972 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.168699 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.169317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.359413 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerDied","Data":"a0d208891d18d712bd489561852a82f696e7d25c808617b7fe312d4e3430e177"} Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.359462 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.359824 4593 scope.go:117] "RemoveContainer" containerID="efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.363063 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" event={"ID":"7fa6519b-42fa-4af8-a739-e77110dff723","Type":"ContainerStarted","Data":"6e770a6481464a86de15f3f2462eee83bfaa47f18624d09d1bb8334e0c3a28c5"} Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.363412 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.386792 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" podStartSLOduration=30.386772095 podStartE2EDuration="30.386772095s" podCreationTimestamp="2026-01-29 11:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:22.385562932 +0000 UTC m=+268.258597123" watchObservedRunningTime="2026-01-29 11:03:22.386772095 +0000 UTC m=+268.259806296" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.387334 4593 patch_prober.go:28] interesting pod/controller-manager-5b5b564f5c-4lr6v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" start-of-body= Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.387393 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.415897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.419500 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.513296 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.610384 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773226 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784bc8c69-h6rvq"] Jan 29 11:03:22 crc kubenswrapper[4593]: E0129 11:03:22.773464 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773479 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: E0129 11:03:22.773495 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773503 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773649 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773664 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.774166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.780804 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.781718 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.781728 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.781972 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.784505 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.784702 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.816039 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.878907 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784bc8c69-h6rvq"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890006 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-proxy-ca-bundles\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890065 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/6ddee183-1516-4cc4-96c3-ee15973bfd37-kube-api-access-hc7vc\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-config\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890136 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-client-ca\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890162 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ddee183-1516-4cc4-96c3-ee15973bfd37-serving-cert\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.951856 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991673 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/6ddee183-1516-4cc4-96c3-ee15973bfd37-kube-api-access-hc7vc\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991734 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-config\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991766 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-client-ca\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991791 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ddee183-1516-4cc4-96c3-ee15973bfd37-serving-cert\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991812 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-proxy-ca-bundles\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.992762 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-client-ca\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.992939 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-proxy-ca-bundles\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.993291 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-config\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.001340 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ddee183-1516-4cc4-96c3-ee15973bfd37-serving-cert\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.035361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/6ddee183-1516-4cc4-96c3-ee15973bfd37-kube-api-access-hc7vc\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.051706 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.081251 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" path="/var/lib/kubelet/pods/0853e6a7-14da-4065-b7e5-4090e64c8335/volumes" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.082133 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" path="/var/lib/kubelet/pods/1b7bc172-8368-4c52-a739-34655c0e9686/volumes" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.088539 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.467176 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.773751 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb"] Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.774544 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776811 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776877 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776811 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776932 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.777176 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.783534 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb"] Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.786104 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrbn\" (UniqueName: \"kubernetes.io/projected/d6728980-2950-4c7e-b09d-cae4db914258-kube-api-access-nbrbn\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903303 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6728980-2950-4c7e-b09d-cae4db914258-serving-cert\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903329 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-config\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903355 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-client-ca\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.004919 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrbn\" (UniqueName: \"kubernetes.io/projected/d6728980-2950-4c7e-b09d-cae4db914258-kube-api-access-nbrbn\") pod 
\"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.005261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6728980-2950-4c7e-b09d-cae4db914258-serving-cert\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.005355 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-config\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.005443 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-client-ca\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.006354 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-client-ca\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.006852 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-config\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.014804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6728980-2950-4c7e-b09d-cae4db914258-serving-cert\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.021736 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrbn\" (UniqueName: \"kubernetes.io/projected/d6728980-2950-4c7e-b09d-cae4db914258-kube-api-access-nbrbn\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.098786 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:27 crc kubenswrapper[4593]: I0129 11:03:27.774085 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:03:32 crc kubenswrapper[4593]: I0129 11:03:32.772489 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784bc8c69-h6rvq"] Jan 29 11:03:32 crc kubenswrapper[4593]: I0129 11:03:32.779045 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb"] Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.440781 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" event={"ID":"d6728980-2950-4c7e-b09d-cae4db914258","Type":"ContainerStarted","Data":"1b2b9e787bfa050fc341035a11b4cf967f296b555dead5093c4663216ce62282"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.441792 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" event={"ID":"d6728980-2950-4c7e-b09d-cae4db914258","Type":"ContainerStarted","Data":"c25f16ca8313c76ec2eaad0c1786b65a4cf02ff766d8c72679738fd5de55b623"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.442413 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.443500 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerStarted","Data":"1ce53b2d0b99b2d6bb3eb602b1207e6091bd4890c409dac160c98e3d3e644ad4"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.445336 4593 generic.go:334] "Generic (PLEG): container finished" podID="69a313ce-b443-4080-9eea-bde0c61dc59d" containerID="4b372ce4759d57dd107215b9809c6dedc94cb89c19e57bfaa5d8813228456028" exitCode=0 Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.445410 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2f96" event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerDied","Data":"4b372ce4759d57dd107215b9809c6dedc94cb89c19e57bfaa5d8813228456028"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.449156 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerStarted","Data":"965f550baeaa01cf189d37cd289f67433885e86d9afdfae25850d9668a83e5eb"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.451592 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" event={"ID":"6ddee183-1516-4cc4-96c3-ee15973bfd37","Type":"ContainerStarted","Data":"817c7022ca4e52724cb75331da50e95d1974eac52c110d92826abd12ca66762a"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.451648 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" event={"ID":"6ddee183-1516-4cc4-96c3-ee15973bfd37","Type":"ContainerStarted","Data":"c95faa64d73eb92d048669d1a66e4c361409f4086b95ed49fd5e768d25706c2f"} Jan 29 11:03:33 crc kubenswrapper[4593]: 
I0129 11:03:33.452599 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.460858 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.477748 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" podStartSLOduration=14.477708344 podStartE2EDuration="14.477708344s" podCreationTimestamp="2026-01-29 11:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:33.474281161 +0000 UTC m=+279.347315372" watchObservedRunningTime="2026-01-29 11:03:33.477708344 +0000 UTC m=+279.350742535" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.515291 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-57v5l" podStartSLOduration=4.6060970900000004 podStartE2EDuration="27.515277604s" podCreationTimestamp="2026-01-29 11:03:06 +0000 UTC" firstStartedPulling="2026-01-29 11:03:09.126709612 +0000 UTC m=+254.999743803" lastFinishedPulling="2026-01-29 11:03:32.035890126 +0000 UTC m=+277.908924317" observedRunningTime="2026-01-29 11:03:33.511972704 +0000 UTC m=+279.385006895" watchObservedRunningTime="2026-01-29 11:03:33.515277604 +0000 UTC m=+279.388311795" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.561921 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vbjtl" podStartSLOduration=4.12171246 podStartE2EDuration="30.56190273s" podCreationTimestamp="2026-01-29 11:03:03 +0000 UTC" firstStartedPulling="2026-01-29 11:03:05.700866881 +0000 UTC m=+251.573901072" lastFinishedPulling="2026-01-29 11:03:32.141057151 +0000 UTC m=+278.014091342" observedRunningTime="2026-01-29 11:03:33.560658406 +0000 UTC m=+279.433692597" watchObservedRunningTime="2026-01-29 11:03:33.56190273 +0000 UTC m=+279.434936911" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.608769 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" podStartSLOduration=14.608749772 podStartE2EDuration="14.608749772s" podCreationTimestamp="2026-01-29 11:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:33.605782211 +0000 UTC m=+279.478816422" watchObservedRunningTime="2026-01-29 11:03:33.608749772 +0000 UTC m=+279.481783963" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.703650 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.934316 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.934505 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:34 crc kubenswrapper[4593]: I0129 11:03:34.461436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-v2f96" event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerStarted","Data":"338033a6a905298191ca2e1da847e7c408756ddb734b172e1d817bed36172496"} Jan 29 11:03:34 crc kubenswrapper[4593]: I0129 11:03:34.483542 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2f96" podStartSLOduration=9.66208734 podStartE2EDuration="26.483508093s" podCreationTimestamp="2026-01-29 11:03:08 +0000 UTC" firstStartedPulling="2026-01-29 11:03:17.256858528 +0000 UTC m=+263.129892719" lastFinishedPulling="2026-01-29 11:03:34.078279281 +0000 UTC m=+279.951313472" observedRunningTime="2026-01-29 11:03:34.477533861 +0000 UTC m=+280.350568062" watchObservedRunningTime="2026-01-29 11:03:34.483508093 +0000 UTC m=+280.356542294" Jan 29 11:03:35 crc kubenswrapper[4593]: I0129 11:03:35.111464 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vbjtl" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" containerName="registry-server" probeResult="failure" output=< Jan 29 11:03:35 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:03:35 crc kubenswrapper[4593]: > Jan 29 11:03:36 crc kubenswrapper[4593]: I0129 11:03:36.536033 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:36 crc kubenswrapper[4593]: I0129 11:03:36.538657 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:36 crc kubenswrapper[4593]: I0129 11:03:36.580702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:37 crc kubenswrapper[4593]: I0129 11:03:37.537369 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:38 crc kubenswrapper[4593]: I0129 11:03:38.938995 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:38 crc kubenswrapper[4593]: I0129 11:03:38.939083 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:38 crc kubenswrapper[4593]: I0129 11:03:38.986020 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.367254 4593 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.368431 4593 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.368530 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369008 4593 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369173 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369189 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369198 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369206 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369214 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369220 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369227 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369233 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369241 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369247 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369257 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369262 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369270 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369275 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369367 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369378 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369386 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369395 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369403 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369410 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369419 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369501 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369507 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.400300 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486583 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486652 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486662 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486711 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486741 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542685 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542745 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542821 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542840 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542931 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.583657 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 
29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.584405 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.584883 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.585188 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643807 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643873 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643904 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643931 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644050 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644076 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644095 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644170 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644217 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644243 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644271 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644299 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644849 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.645164 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.645216 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.695343 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: W0129 11:03:39.714224 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479 WatchSource:0}: Error finding container 31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479: Status 404 returned error can't find the container with id 31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479 Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.716743 4593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f2ec912a95dbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,LastTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.493310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"} Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.493732 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479"} Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.494486 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.494977 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.495517 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.007087 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.007161 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.505175 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.507180 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508043 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" exitCode=0 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508092 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" exitCode=0 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508104 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" exitCode=0 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508115 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" exitCode=2 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.509271 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.479877 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.481662 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.482586 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.483135 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.483581 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.516855 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.519004 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" exitCode=0 Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.519060 4593 scope.go:117] "RemoveContainer" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.519224 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.536042 4593 scope.go:117] "RemoveContainer" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540148 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540188 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540284 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540282 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540344 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.541428 4593 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.541451 4593 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.541461 4593 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.557166 4593 scope.go:117] "RemoveContainer" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.576650 4593 scope.go:117] "RemoveContainer" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.594954 4593 scope.go:117] "RemoveContainer" containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.611849 4593 scope.go:117] "RemoveContainer" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.704843 4593 scope.go:117] "RemoveContainer" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.707703 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\": container with ID starting with c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284 not found: ID does not exist" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.707906 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284"} err="failed to get container status \"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\": rpc error: code = NotFound desc = could not find container \"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\": container with ID starting with c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.708016 4593 scope.go:117] "RemoveContainer" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.709337 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\": container with ID starting with 5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a not found: ID does not exist" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.709479 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a"} err="failed to get container status \"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\": rpc error: code = NotFound desc = could not find container \"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\": container with ID starting with 5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.709686 4593 scope.go:117] "RemoveContainer" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.712286 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\": container with ID starting with 0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3 not found: ID does not exist" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.712347 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3"} err="failed to get container status \"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\": rpc error: code = NotFound desc = could not find container \"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\": container with ID starting with 0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.712376 4593 scope.go:117] "RemoveContainer" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.713594 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\": container with ID starting with d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264 not found: ID does not exist" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.713645 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264"} err="failed to get container status \"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\": rpc error: code = NotFound desc = could not find container \"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\": container with ID starting with d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.713668 4593 scope.go:117] "RemoveContainer" containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.715065 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\": container with ID starting with 5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9 not found: ID does not exist" 
containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.715088 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9"} err="failed to get container status \"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\": rpc error: code = NotFound desc = could not find container \"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\": container with ID starting with 5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.715101 4593 scope.go:117] "RemoveContainer" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.715521 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\": container with ID starting with f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece not found: ID does not exist" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.715553 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece"} err="failed to get container status \"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\": rpc error: code = NotFound desc = could not find container \"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\": container with ID starting with f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.835184 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.835823 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.836345 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.114172 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.971882 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:43 
crc kubenswrapper[4593]: I0129 11:03:43.973164 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.973516 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.973840 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.011958 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.012599 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.013278 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.014046 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.077338 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.078096 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.078547 4593 status_manager.go:851] "Failed to get status for pod" 
podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.541871 4593 generic.go:334] "Generic (PLEG): container finished" podID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerID="1944570fd0d711d5a3ddcb6c09ae1efbc4f659af6ced43239c4b6ab7e0c86a58" exitCode=0 Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.541927 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerDied","Data":"1944570fd0d711d5a3ddcb6c09ae1efbc4f659af6ced43239c4b6ab7e0c86a58"} Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.542773 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.543319 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.543976 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.544419 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.136721 4593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f2ec912a95dbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,LastTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.765854 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.766095 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.766335 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.766558 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.767085 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.767108 4593 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.767306 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.865502 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.866588 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.866895 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.867175 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.867416 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.968603 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.999249 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"c78186dc-c8e4-4018-8e50-f7fc0e719890\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.999392 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"c78186dc-c8e4-4018-8e50-f7fc0e719890\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.999508 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"c78186dc-c8e4-4018-8e50-f7fc0e719890\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:46.999972 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock" (OuterVolumeSpecName: "var-lock") pod "c78186dc-c8e4-4018-8e50-f7fc0e719890" (UID: "c78186dc-c8e4-4018-8e50-f7fc0e719890"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.000099 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c78186dc-c8e4-4018-8e50-f7fc0e719890" (UID: "c78186dc-c8e4-4018-8e50-f7fc0e719890"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.005917 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c78186dc-c8e4-4018-8e50-f7fc0e719890" (UID: "c78186dc-c8e4-4018-8e50-f7fc0e719890"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.100893 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.100945 4593 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.100963 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:47 crc kubenswrapper[4593]: E0129 11:03:47.369887 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.558812 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerDied","Data":"3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b"} Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.559239 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.558878 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.563242 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.563883 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.564263 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.564682 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.656352 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" containerID="cri-o://b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" gracePeriod=30 Jan 29 11:03:48 crc kubenswrapper[4593]: E0129 11:03:48.170711 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.250695 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.251359 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.251654 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.251925 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.252197 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.252479 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317441 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317744 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317830 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 
11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317897 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317938 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317978 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.318014 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.318476 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.318653 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.322379 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.322627 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.323037 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.327216 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9" (OuterVolumeSpecName: "kube-api-access-9stq9") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "kube-api-access-9stq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.334050 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.341610 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419833 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419863 4593 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419880 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419889 4593 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419903 4593 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419911 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419920 4593 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567107 4593 generic.go:334] "Generic (PLEG): container finished" podID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" exitCode=0 Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567148 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerDied","Data":"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3"} Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567176 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerDied","Data":"fb99d447e5189720ac881b538d20b70d4e3aef55d12b3a424d01a9dc39152640"} Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567191 4593 scope.go:117] "RemoveContainer" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567217 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.568403 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.568824 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.569205 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.569787 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.570285 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.585091 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.585698 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.586364 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.586745 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.587035 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.589034 4593 scope.go:117] "RemoveContainer" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" Jan 29 11:03:48 crc kubenswrapper[4593]: E0129 11:03:48.589485 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3\": container with ID starting with b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3 not found: ID does not exist" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.589539 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3"} err="failed to get container status \"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3\": rpc error: code = NotFound desc = could not find container \"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3\": container with ID starting with b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3 not found: ID does not exist" Jan 29 11:03:49 crc kubenswrapper[4593]: E0129 11:03:49.771352 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.074565 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.076418 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.076857 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.077584 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.079816 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.080436 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.092458 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.092501 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:51 crc kubenswrapper[4593]: E0129 11:03:51.093120 4593 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.093880 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.593862 4593 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f9f02392ece426d45bf04eadcad66ef551bcb96420b397c2e95276ccec2b5800" exitCode=0 Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.593956 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f9f02392ece426d45bf04eadcad66ef551bcb96420b397c2e95276ccec2b5800"} Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.594266 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7bc9277c29f0ea4f90bc30c23c8fafde6d0cd08135ba10b6c6165096d15d8a7a"} Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.594798 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.594853 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.595412 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: E0129 11:03:51.595441 4593 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.596181 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.596623 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.598484 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.599056 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605108 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605332 4593 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0" exitCode=1 Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605374 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0"} Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605987 4593 scope.go:117] "RemoveContainer" containerID="3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0" Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.616445 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26499e684473ff4ac9eb0dedbbff033965a500a9a4276cf5a92c08e9fe64f96b"} Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.616489 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e6cf6f03687af6fbd5d29111ddbdaf274a7444ef5a36c54e812c6bc4d6bcf4b"} Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.616499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aa647d83dabe5a3d79a19930063128a6f909621f5d5c41375de40be266f096f9"} Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.626540 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.626616 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb630f127dad9c772aa1b0d91c47433e7de976de011fabe9ef8cc269850f92de"} Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631471 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3cc85a8397a00dc41754449a054d8846ba6e9208d885de111d0af2960e7ea73b"} Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631514 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"53ba891dd20f4bbede831110f88317be0b1cb520878389c5750aedc2c2db2b51"} Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631806 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:53 crc 
kubenswrapper[4593]: I0129 11:03:53.631812 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631870 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:54 crc kubenswrapper[4593]: I0129 11:03:54.262849 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:03:54 crc kubenswrapper[4593]: I0129 11:03:54.668454 4593 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 11:03:56 crc kubenswrapper[4593]: I0129 11:03:56.094932 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:56 crc kubenswrapper[4593]: I0129 11:03:56.094985 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:56 crc kubenswrapper[4593]: I0129 11:03:56.100360 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:58 crc kubenswrapper[4593]: I0129 11:03:58.772849 4593 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:58 crc kubenswrapper[4593]: I0129 11:03:58.968255 4593 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a227a50d-4a52-4999-b737-d4a81267b353" Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.674363 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.674396 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.678201 4593 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a227a50d-4a52-4999-b737-d4a81267b353" Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.678558 4593 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://aa647d83dabe5a3d79a19930063128a6f909621f5d5c41375de40be266f096f9" Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.678586 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.117787 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.122430 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.678461 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.679130 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a" Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.684299 4593 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a227a50d-4a52-4999-b737-d4a81267b353" Jan 29 11:04:04 crc kubenswrapper[4593]: I0129 11:04:04.254794 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 11:04:06 crc kubenswrapper[4593]: I0129 11:04:06.157202 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 11:04:07 crc kubenswrapper[4593]: I0129 11:04:07.850035 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.499171 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.522676 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.675454 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.856835 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.897774 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 11:04:09 crc kubenswrapper[4593]: I0129 11:04:09.115113 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 11:04:09 crc kubenswrapper[4593]: I0129 11:04:09.735360 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 11:04:09 crc kubenswrapper[4593]: I0129 11:04:09.769902 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 11:04:10 crc kubenswrapper[4593]: I0129 11:04:10.302318 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 11:04:10 crc kubenswrapper[4593]: I0129 11:04:10.818490 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.037520 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.039910 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.329921 4593 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.339712 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.433420 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.488552 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.494104 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.738238 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.793103 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.013498 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.356879 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.390475 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.469974 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.475573 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.520900 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.599373 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.784511 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.159643 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.369401 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.486454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.582573 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.587812 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 11:04:13 crc 
kubenswrapper[4593]: I0129 11:04:13.641998 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.647115 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.830901 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.913412 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.152753 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.165777 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.249185 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.260991 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.311548 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.376013 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.386461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.484623 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.502797 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.623935 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.663683 4593 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.718990 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.741961 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.802675 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.867104 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 11:04:15 crc 
kubenswrapper[4593]: I0129 11:04:15.001857 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.052692 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.064608 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.071819 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.112869 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.131380 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.149669 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.181590 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.207303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.241155 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.348657 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.375030 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.398095 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.404353 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.448838 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.471253 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.479280 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.536961 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.561201 4593 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.579594 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.681546 4593 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.686391 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.686367028 podStartE2EDuration="36.686367028s" podCreationTimestamp="2026-01-29 11:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:58.917876055 +0000 UTC m=+304.790910246" watchObservedRunningTime="2026-01-29 11:04:15.686367028 +0000 UTC m=+321.559401209" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.689077 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.689270 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.694692 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.707065 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.70704616 podStartE2EDuration="17.70704616s" podCreationTimestamp="2026-01-29 11:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:04:15.706243848 +0000 UTC m=+321.579278059" watchObservedRunningTime="2026-01-29 11:04:15.70704616 +0000 UTC m=+321.580080351" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.723962 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.778843 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.780936 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.888010 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.994994 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.003409 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.069554 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.119730 4593 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.332113 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.422673 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.465396 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.477539 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.516060 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.525918 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.606215 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.680864 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.752253 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.762336 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.770932 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.829724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.896992 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.931265 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.017759 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.040227 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.044682 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.081496 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" path="/var/lib/kubelet/pods/066b2b93-4946-44cf-9757-05c8282cb7a3/volumes" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.117851 
4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.183813 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.199548 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.278852 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.314468 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.343146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.367905 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.410580 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.581095 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.624505 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.662564 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.669546 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.693568 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.847098 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.912878 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.003911 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.010196 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.031037 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.064219 4593 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.266688 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.288028 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.292075 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.302106 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.398714 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.497765 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.545338 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.662066 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.696931 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.755921 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.761445 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.805724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.886338 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.915245 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.927159 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.930922 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.023241 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.028184 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.159192 4593 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.163008 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.177304 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.212103 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.251654 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.254501 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.315244 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.325430 4593 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.374738 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.390820 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.396674 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.429712 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.451063 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.533155 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.591200 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.651542 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.653365 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.710515 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.731814 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.769763 4593 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.778667 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.816606 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.819000 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.867981 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.883164 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.931836 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.014760 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.046774 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.167573 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.185621 4593 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.199469 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.220565 4593 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.220857 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" gracePeriod=5 Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.262671 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.423271 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.438235 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.449498 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.470326 4593 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.471174 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.487974 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.532136 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.543578 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.597173 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.614993 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.653942 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.691574 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.723744 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.724744 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.758896 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.865082 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.952214 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.008655 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.044764 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.086672 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.087560 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.137151 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 11:04:21 crc 
kubenswrapper[4593]: I0129 11:04:21.151616 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.223403 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.235925 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.326824 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.365220 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.410360 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.515794 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.547146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.580245 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.721850 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.837497 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.872450 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.882378 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.129028 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.140301 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.140946 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.148697 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.201128 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.205126 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 11:04:22 crc 
kubenswrapper[4593]: I0129 11:04:22.255596 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.310062 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.317897 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.318077 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.548626 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.554895 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.590960 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.621213 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.774240 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.860402 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.897607 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.038296 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.046734 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.069715 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.102438 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.198724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.478211 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.688597 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.715777 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 11:04:23 crc 
kubenswrapper[4593]: I0129 11:04:23.741509 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.904942 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.992328 4593 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.163081 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.167161 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.320051 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.348411 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.486831 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.696064 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.015242 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.099477 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.188195 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.616183 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.671308 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.814459 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.814553 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.836923 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871566 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871690 4593 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" exitCode=137 Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871730 4593 scope.go:117] "RemoveContainer" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871785 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885844 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885886 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885922 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885976 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886013 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885972 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886017 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886083 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886091 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886467 4593 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886502 4593 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886514 4593 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886527 4593 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.894763 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.894898 4593 scope.go:117] "RemoveContainer" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" Jan 29 11:04:25 crc kubenswrapper[4593]: E0129 11:04:25.895470 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa\": container with ID starting with 1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa not found: ID does not exist" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.895508 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"} err="failed to get container status \"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa\": rpc error: code = NotFound desc = could not find container \"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa\": container with ID starting with 1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa not found: ID does not exist" Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.988222 4593 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.273116 4593 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.637676 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.663626 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.686003 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.756959 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.779115 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.929750 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.082017 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.083337 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.094228 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.094302 4593 
kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfa76c00-a5b7-488b-b870-4e20971ef9ad" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.099220 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.099255 4593 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfa76c00-a5b7-488b-b870-4e20971ef9ad" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.174773 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.377976 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 11:05:03 crc kubenswrapper[4593]: I0129 11:05:03.946086 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:05:03 crc kubenswrapper[4593]: I0129 11:05:03.946608 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:05:33 crc kubenswrapper[4593]: I0129 11:05:33.946543 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:05:33 crc kubenswrapper[4593]: I0129 11:05:33.947204 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.946587 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.947239 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.947289 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.948279 4593 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.948337 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50" gracePeriod=600 Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.518699 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50" exitCode=0 Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.518748 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50"} Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.519118 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d"} Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.519150 4593 scope.go:117] "RemoveContainer" containerID="85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.531124 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7s4r"] Jan 29 11:08:22 crc kubenswrapper[4593]: E0129 11:08:22.532895 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533036 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" Jan 29 11:08:22 crc kubenswrapper[4593]: E0129 11:08:22.533144 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533237 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:08:22 crc kubenswrapper[4593]: E0129 11:08:22.533329 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerName="installer" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533407 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerName="installer" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533596 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533704 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533776 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerName="installer" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.534250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.535810 4593 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-g894x" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.536050 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.536174 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.540511 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.541335 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.543076 4593 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zv4cm" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.548260 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qhfhj"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.548907 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.551086 4593 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s8j76" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.554690 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7s4r"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.558258 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qhfhj"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.565056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.578874 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcsr\" (UniqueName: \"kubernetes.io/projected/79aa2cc5-a031-412d-a4c7-ba9251f84ed6-kube-api-access-qlcsr\") pod \"cert-manager-cainjector-cf98fcc89-lw7j7\" (UID: \"79aa2cc5-a031-412d-a4c7-ba9251f84ed6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.578999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnnl\" (UniqueName: \"kubernetes.io/projected/59d387c2-4d0b-4d6c-a0d8-2230657bebd0-kube-api-access-bvnnl\") pod \"cert-manager-858654f9db-qhfhj\" (UID: \"59d387c2-4d0b-4d6c-a0d8-2230657bebd0\") " pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.579039 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbqw\" (UniqueName: \"kubernetes.io/projected/e2b5756a-c46e-4e76-90bf-0a5c7c1dc759-kube-api-access-rbbqw\") pod \"cert-manager-webhook-687f57d79b-t7s4r\" (UID: \"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.679768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvnnl\" (UniqueName: \"kubernetes.io/projected/59d387c2-4d0b-4d6c-a0d8-2230657bebd0-kube-api-access-bvnnl\") pod \"cert-manager-858654f9db-qhfhj\" (UID: \"59d387c2-4d0b-4d6c-a0d8-2230657bebd0\") " pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.679817 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbqw\" (UniqueName: \"kubernetes.io/projected/e2b5756a-c46e-4e76-90bf-0a5c7c1dc759-kube-api-access-rbbqw\") pod \"cert-manager-webhook-687f57d79b-t7s4r\" (UID: \"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.679867 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcsr\" (UniqueName: \"kubernetes.io/projected/79aa2cc5-a031-412d-a4c7-ba9251f84ed6-kube-api-access-qlcsr\") pod \"cert-manager-cainjector-cf98fcc89-lw7j7\" (UID: \"79aa2cc5-a031-412d-a4c7-ba9251f84ed6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.699264 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbqw\" (UniqueName: 
\"kubernetes.io/projected/e2b5756a-c46e-4e76-90bf-0a5c7c1dc759-kube-api-access-rbbqw\") pod \"cert-manager-webhook-687f57d79b-t7s4r\" (UID: \"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.700368 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvnnl\" (UniqueName: \"kubernetes.io/projected/59d387c2-4d0b-4d6c-a0d8-2230657bebd0-kube-api-access-bvnnl\") pod \"cert-manager-858654f9db-qhfhj\" (UID: \"59d387c2-4d0b-4d6c-a0d8-2230657bebd0\") " pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.701485 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlcsr\" (UniqueName: \"kubernetes.io/projected/79aa2cc5-a031-412d-a4c7-ba9251f84ed6-kube-api-access-qlcsr\") pod \"cert-manager-cainjector-cf98fcc89-lw7j7\" (UID: \"79aa2cc5-a031-412d-a4c7-ba9251f84ed6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.857439 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.873786 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.910004 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.143164 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7"] Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.157869 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.223520 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qhfhj"] Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.235087 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" event={"ID":"79aa2cc5-a031-412d-a4c7-ba9251f84ed6","Type":"ContainerStarted","Data":"1f5e72b8c35ebdaacdd09ea8ad8f6ceabc567826281d7b1c121b99d0d05a125d"} Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.372625 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7s4r"] Jan 29 11:08:23 crc kubenswrapper[4593]: W0129 11:08:23.375223 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2b5756a_c46e_4e76_90bf_0a5c7c1dc759.slice/crio-6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510 WatchSource:0}: Error finding container 6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510: Status 404 returned error can't find the container with id 6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510 Jan 29 11:08:24 crc kubenswrapper[4593]: I0129 11:08:24.242207 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" event={"ID":"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759","Type":"ContainerStarted","Data":"6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510"} Jan 29 11:08:24 crc 
kubenswrapper[4593]: I0129 11:08:24.244497 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qhfhj" event={"ID":"59d387c2-4d0b-4d6c-a0d8-2230657bebd0","Type":"ContainerStarted","Data":"a7122287ba47f87676bebb1341fd9e131c0312f6a879f094c01013f66ecc40f3"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.274651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" event={"ID":"79aa2cc5-a031-412d-a4c7-ba9251f84ed6","Type":"ContainerStarted","Data":"fd32d1d4a6d4706c4b7b8e0f3bc1d0422b7f1d9effaa3079f5a32565bc21c54c"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.276525 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" event={"ID":"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759","Type":"ContainerStarted","Data":"7a6a7ee7ba6871741addb1938c5349767fcbe78536de29c611ba973ba8800f3b"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.276615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.277906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qhfhj" event={"ID":"59d387c2-4d0b-4d6c-a0d8-2230657bebd0","Type":"ContainerStarted","Data":"31c0c240e391114a8b6f567a9d4aca5053c83f18bae943a421ee9339284d814c"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.290855 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" podStartSLOduration=1.909984184 podStartE2EDuration="7.290832841s" podCreationTimestamp="2026-01-29 11:08:22 +0000 UTC" firstStartedPulling="2026-01-29 11:08:23.15751877 +0000 UTC m=+569.030552961" lastFinishedPulling="2026-01-29 11:08:28.538367417 +0000 UTC m=+574.411401618" observedRunningTime="2026-01-29 11:08:29.287747448 +0000 UTC m=+575.160781639" watchObservedRunningTime="2026-01-29 11:08:29.290832841 +0000 UTC m=+575.163867032" Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.309238 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qhfhj" podStartSLOduration=1.989634353 podStartE2EDuration="7.309214047s" podCreationTimestamp="2026-01-29 11:08:22 +0000 UTC" firstStartedPulling="2026-01-29 11:08:23.237536869 +0000 UTC m=+569.110571060" lastFinishedPulling="2026-01-29 11:08:28.557116563 +0000 UTC m=+574.430150754" observedRunningTime="2026-01-29 11:08:29.308227311 +0000 UTC m=+575.181261502" watchObservedRunningTime="2026-01-29 11:08:29.309214047 +0000 UTC m=+575.182248238" Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.338665 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" podStartSLOduration=2.071969894 podStartE2EDuration="7.338645741s" podCreationTimestamp="2026-01-29 11:08:22 +0000 UTC" firstStartedPulling="2026-01-29 11:08:23.378008489 +0000 UTC m=+569.251042680" lastFinishedPulling="2026-01-29 11:08:28.644684336 +0000 UTC m=+574.517718527" observedRunningTime="2026-01-29 11:08:29.33710283 +0000 UTC m=+575.210137021" watchObservedRunningTime="2026-01-29 11:08:29.338645741 +0000 UTC m=+575.211679952" Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.869802 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 11:08:31 crc 
kubenswrapper[4593]: I0129 11:08:31.870177 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" containerID="cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870278 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" containerID="cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870304 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870340 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" containerID="cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870380 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" containerID="cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870473 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" containerID="cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870456 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" containerID="cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.914119 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" containerID="cri-o://a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" gracePeriod=30 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.208387 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.211716 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-acl-logging/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.212146 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-controller/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.212566 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264274 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sm9pl"] Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264534 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264549 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264559 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264567 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264579 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264589 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264602 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264609 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264621 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264694 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264704 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264713 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264724 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264743 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264751 4593 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264764 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kubecfg-setup" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264772 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kubecfg-setup" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264780 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264789 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264803 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264811 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264821 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264829 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264952 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264963 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264975 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264984 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264994 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265002 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265013 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265025 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265046 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 
11:08:32.265057 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.265165 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265175 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265278 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265291 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.267356 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.302102 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/2.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.303947 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.303990 4593 generic.go:334] "Generic (PLEG): container finished" podID="c76afd0b-36c6-4faa-9278-c08c60c483e9" containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" exitCode=2 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.304050 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerDied","Data":"7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.304097 4593 scope.go:117] "RemoveContainer" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.304673 4593 scope.go:117] "RemoveContainer" containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.305192 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xpt4q_openshift-multus(c76afd0b-36c6-4faa-9278-c08c60c483e9)\"" pod="openshift-multus/multus-xpt4q" podUID="c76afd0b-36c6-4faa-9278-c08c60c483e9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.307118 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.309623 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-acl-logging/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310125 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-controller/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310505 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310529 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310538 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310545 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310555 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310562 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310570 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" exitCode=143 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310577 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" exitCode=143 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310593 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310615 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310625 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310648 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310657 4593 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310677 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310686 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310692 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310697 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310702 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310707 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310712 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310717 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310722 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310727 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310734 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310741 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310748 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310753 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310758 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310763 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310768 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310772 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310778 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310784 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310789 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310795 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310802 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310808 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310815 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310820 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310825 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310831 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310863 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310873 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310878 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310883 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310891 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310903 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310908 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310917 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310922 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310927 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310932 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310937 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310942 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310947 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310951 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.311023 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.333232 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337768 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337834 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337883 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337909 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337929 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337964 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337988 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338018 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338047 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338080 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338101 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338145 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338202 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338293 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338320 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338348 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338370 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338424 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338577 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-etc-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338611 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-log-socket\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338654 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvp7r\" (UniqueName: \"kubernetes.io/projected/cc84611e-9a00-45a5-b761-0911d9b47bf7-kube-api-access-bvp7r\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338703 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-ovn\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338737 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-systemd-units\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338787 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-netd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-kubelet\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338833 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-bin\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338968 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338978 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339000 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339007 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339034 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket" (OuterVolumeSpecName: "log-socket") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339040 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339067 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log" (OuterVolumeSpecName: "node-log") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339284 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339431 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339758 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339809 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339848 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash" (OuterVolumeSpecName: "host-slash") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339847 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339866 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339886 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340107 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340147 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-config\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340166 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-systemd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340185 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovn-node-metrics-cert\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340212 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-netns\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-node-log\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340312 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340410 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-var-lib-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340440 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-env-overrides\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340755 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-script-lib\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340801 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-slash\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340892 4593 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340908 4593 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340922 4593 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340934 4593 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340947 4593 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340972 4593 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340987 4593 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341002 4593 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341013 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341024 4593 
reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341034 4593 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341045 4593 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341056 4593 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341068 4593 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341082 4593 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341092 4593 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341102 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.344777 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.345994 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld" (OuterVolumeSpecName: "kube-api-access-jfpld") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "kube-api-access-jfpld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.350477 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.351404 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.366304 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.381408 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.393509 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.404871 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.421680 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.440508 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441754 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-ovn\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441801 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441806 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-ovn\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441832 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-systemd-units\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441852 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-netd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441852 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-systemd-units\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441831 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441925 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-netd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441965 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-kubelet\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-bin\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442049 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-kubelet\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442046 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442087 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-config\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442164 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-systemd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442187 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovn-node-metrics-cert\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442262 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-systemd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442234 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-netns\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442308 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-node-log\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442335 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442359 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-node-log\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442364 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-env-overrides\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442382 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442385 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-var-lib-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442371 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-netns\") pod \"ovnkube-node-sm9pl\" (UID: 
\"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442407 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-script-lib\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-var-lib-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-slash\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442445 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-etc-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-log-socket\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442535 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-etc-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442582 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-slash\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442605 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-log-socket\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442625 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvp7r\" (UniqueName: \"kubernetes.io/projected/cc84611e-9a00-45a5-b761-0911d9b47bf7-kube-api-access-bvp7r\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc 
kubenswrapper[4593]: I0129 11:08:32.443055 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443069 4593 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443078 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443011 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-env-overrides\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443167 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-config\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-bin\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443207 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-script-lib\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.446440 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovn-node-metrics-cert\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.458053 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.458623 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvp7r\" (UniqueName: \"kubernetes.io/projected/cc84611e-9a00-45a5-b761-0911d9b47bf7-kube-api-access-bvp7r\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.483321 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.509421 4593 scope.go:117] "RemoveContainer" 
containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.509948 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510007 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510039 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.510327 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510355 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510374 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.510600 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510625 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 
83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510691 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.511828 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.511867 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.511894 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.512436 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.512468 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.512486 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.512972 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.512991 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc 
error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.513006 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.513679 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.513736 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.513766 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.514094 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514120 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514143 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.514476 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514503 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514520 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.514863 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514907 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514929 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515220 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515243 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515488 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515519 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515866 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515890 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516098 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516116 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516317 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516339 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516521 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516537 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517390 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 
29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517423 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517715 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517763 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518041 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518068 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518364 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518392 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518734 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518771 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519025 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status 
\"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519050 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519334 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519370 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519655 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519684 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519919 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519939 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520144 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520183 4593 scope.go:117] "RemoveContainer" 
containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520438 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520460 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520758 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520776 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521015 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521042 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521267 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521309 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521573 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find 
container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521600 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521858 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521901 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522137 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522181 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522440 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522463 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522719 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522740 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522994 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523024 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523331 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523355 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523660 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523695 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523959 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523995 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.524261 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with 
a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.581983 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.660090 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.682459 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.085681 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" path="/var/lib/kubelet/pods/943b00a1-4aae-4054-b4fd-dc512fe58270/volumes" Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.318230 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/2.log" Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.320429 4593 generic.go:334] "Generic (PLEG): container finished" podID="cc84611e-9a00-45a5-b761-0911d9b47bf7" containerID="f5e3aad0c41912236686e6faf67844bb6d1c37fd275fa0c9fbe20bc6ecc870ac" exitCode=0 Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.320464 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerDied","Data":"f5e3aad0c41912236686e6faf67844bb6d1c37fd275fa0c9fbe20bc6ecc870ac"} Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.320491 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"e9d5e0c6cc806c8771b09bb971ba4bbc96484d6bad3775a48792cf313915f9b0"} Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.946529 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.946899 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328602 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"9da1c9cfa819caebcf5cfdb280d6a2bc6fe9be20c94cd1c294a21f55b262846f"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328929 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"f54f192beea231a88d26245e44e66f275a07e443f5bc6916b7349f0cbac7b999"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328939 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" 
event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"a5b6b60224d98739e9c06366973644f92aad41da241024e41b74bb0d575a6fc3"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328948 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"dc3edd3f9345d17646f4fe4918cecf6778a7963b909fb243d304e748fbf03451"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328956 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"51082c1388d07f2cb08f551e99213d987d08b24bc6e484e9810db2912ad174cd"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328964 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"6a01ae98ea5a6d2abb2c27f744261bd5225d22b0977678fe0a4b97d6db62b63a"} Jan 29 11:08:36 crc kubenswrapper[4593]: I0129 11:08:36.352096 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"1242e3161aaf4e2337474d78cb73c623d0a9f71c9c91b7f1425ff3c57ecebdaa"} Jan 29 11:08:37 crc kubenswrapper[4593]: I0129 11:08:37.860716 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.411991 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"ea2e3c096d1ef81526f242930773ddc338cdda6d2069f1da109ac54e38291144"} Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.413752 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.413810 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.413850 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.444030 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.452348 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.480790 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" podStartSLOduration=7.480774496 podStartE2EDuration="7.480774496s" podCreationTimestamp="2026-01-29 11:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:08:39.446522911 +0000 UTC m=+585.319557092" watchObservedRunningTime="2026-01-29 11:08:39.480774496 +0000 UTC m=+585.353808687" Jan 29 11:08:45 crc kubenswrapper[4593]: I0129 11:08:45.082874 4593 scope.go:117] "RemoveContainer" 
containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" Jan 29 11:08:45 crc kubenswrapper[4593]: E0129 11:08:45.084354 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xpt4q_openshift-multus(c76afd0b-36c6-4faa-9278-c08c60c483e9)\"" pod="openshift-multus/multus-xpt4q" podUID="c76afd0b-36c6-4faa-9278-c08c60c483e9" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.074721 4593 scope.go:117] "RemoveContainer" containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.373661 4593 scope.go:117] "RemoveContainer" containerID="56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.560757 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/2.log" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.560825 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"7ce67f1a579e52aa9e2e4d4f4f4d42ee734442d1f408d335f8fbb4182b8ca8ba"} Jan 29 11:09:02 crc kubenswrapper[4593]: I0129 11:09:02.660462 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:09:03 crc kubenswrapper[4593]: I0129 11:09:03.946464 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:09:03 crc kubenswrapper[4593]: I0129 11:09:03.947152 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.399656 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w"] Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.401140 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.404074 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.425708 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w"] Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.512350 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.512666 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.512822 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.613930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.614014 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.614085 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.614870 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.615134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.638975 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.719051 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.914572 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w"] Jan 29 11:09:22 crc kubenswrapper[4593]: I0129 11:09:22.679865 4593 generic.go:334] "Generic (PLEG): container finished" podID="b514f100-8029-41bf-9315-9e8c18a7238a" containerID="78c3759864d05d7d19be3b0d83ed871900e54c8183aab376b46a43c128e076f2" exitCode=0 Jan 29 11:09:22 crc kubenswrapper[4593]: I0129 11:09:22.681029 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"78c3759864d05d7d19be3b0d83ed871900e54c8183aab376b46a43c128e076f2"} Jan 29 11:09:22 crc kubenswrapper[4593]: I0129 11:09:22.681138 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerStarted","Data":"359e2a1cd8d457cda64b56ce97afa8c8155194f23f4dad2b817bd5760fa136f3"} Jan 29 11:09:24 crc kubenswrapper[4593]: I0129 11:09:24.693145 4593 generic.go:334] "Generic (PLEG): container finished" podID="b514f100-8029-41bf-9315-9e8c18a7238a" containerID="f480d4bff3158dd2da88ac217ce006fa0885868606782266869d93440be1913a" exitCode=0 Jan 29 11:09:24 crc kubenswrapper[4593]: I0129 11:09:24.693195 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"f480d4bff3158dd2da88ac217ce006fa0885868606782266869d93440be1913a"} Jan 29 11:09:25 crc kubenswrapper[4593]: I0129 11:09:25.701526 4593 generic.go:334] "Generic (PLEG): container finished" podID="b514f100-8029-41bf-9315-9e8c18a7238a" containerID="849838256ca3a590bbf121bdb5fd48f8450f87eb5499fb4dcc356b159271a2c8" exitCode=0 Jan 29 11:09:25 crc kubenswrapper[4593]: I0129 
11:09:25.701623 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"849838256ca3a590bbf121bdb5fd48f8450f87eb5499fb4dcc356b159271a2c8"} Jan 29 11:09:26 crc kubenswrapper[4593]: I0129 11:09:26.930115 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.083199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"b514f100-8029-41bf-9315-9e8c18a7238a\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.083661 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"b514f100-8029-41bf-9315-9e8c18a7238a\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.083706 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"b514f100-8029-41bf-9315-9e8c18a7238a\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.084333 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle" (OuterVolumeSpecName: "bundle") pod "b514f100-8029-41bf-9315-9e8c18a7238a" (UID: "b514f100-8029-41bf-9315-9e8c18a7238a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.090830 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m" (OuterVolumeSpecName: "kube-api-access-dks2m") pod "b514f100-8029-41bf-9315-9e8c18a7238a" (UID: "b514f100-8029-41bf-9315-9e8c18a7238a"). InnerVolumeSpecName "kube-api-access-dks2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.114078 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util" (OuterVolumeSpecName: "util") pod "b514f100-8029-41bf-9315-9e8c18a7238a" (UID: "b514f100-8029-41bf-9315-9e8c18a7238a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.185219 4593 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.185290 4593 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.185316 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") on node \"crc\" DevicePath \"\"" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.723143 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"359e2a1cd8d457cda64b56ce97afa8c8155194f23f4dad2b817bd5760fa136f3"} Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.723191 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359e2a1cd8d457cda64b56ce97afa8c8155194f23f4dad2b817bd5760fa136f3" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.723262 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.081934 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xmhmc"] Jan 29 11:09:29 crc kubenswrapper[4593]: E0129 11:09:29.082146 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="extract" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082161 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="extract" Jan 29 11:09:29 crc kubenswrapper[4593]: E0129 11:09:29.082207 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="pull" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082216 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="pull" Jan 29 11:09:29 crc kubenswrapper[4593]: E0129 11:09:29.082233 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="util" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082240 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="util" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082361 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="extract" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082872 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.084798 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-q8kdv" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.085176 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.086142 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.100780 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xmhmc"] Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.218872 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhb6\" (UniqueName: \"kubernetes.io/projected/b2e0c4ff-8a2b-474d-8414-a0026d61b11e-kube-api-access-gnhb6\") pod \"nmstate-operator-646758c888-xmhmc\" (UID: \"b2e0c4ff-8a2b-474d-8414-a0026d61b11e\") " pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.320417 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhb6\" (UniqueName: \"kubernetes.io/projected/b2e0c4ff-8a2b-474d-8414-a0026d61b11e-kube-api-access-gnhb6\") pod \"nmstate-operator-646758c888-xmhmc\" (UID: \"b2e0c4ff-8a2b-474d-8414-a0026d61b11e\") " pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.340580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhb6\" (UniqueName: \"kubernetes.io/projected/b2e0c4ff-8a2b-474d-8414-a0026d61b11e-kube-api-access-gnhb6\") pod \"nmstate-operator-646758c888-xmhmc\" (UID: \"b2e0c4ff-8a2b-474d-8414-a0026d61b11e\") " pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.415777 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.805431 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xmhmc"] Jan 29 11:09:30 crc kubenswrapper[4593]: I0129 11:09:30.749848 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" event={"ID":"b2e0c4ff-8a2b-474d-8414-a0026d61b11e","Type":"ContainerStarted","Data":"8a90ec6bf0ce834b124e82cbdf4240d6d6ecbbea28bf5beecbf453e216277260"} Jan 29 11:09:32 crc kubenswrapper[4593]: I0129 11:09:32.762319 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" event={"ID":"b2e0c4ff-8a2b-474d-8414-a0026d61b11e","Type":"ContainerStarted","Data":"82b6af78fede5e003fb41379fe5c96489cc9d4eb683404d4585a103f844a7dbf"} Jan 29 11:09:32 crc kubenswrapper[4593]: I0129 11:09:32.784521 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" podStartSLOduration=1.765230715 podStartE2EDuration="3.784505064s" podCreationTimestamp="2026-01-29 11:09:29 +0000 UTC" firstStartedPulling="2026-01-29 11:09:29.822443203 +0000 UTC m=+635.695477404" lastFinishedPulling="2026-01-29 11:09:31.841717562 +0000 UTC m=+637.714751753" observedRunningTime="2026-01-29 11:09:32.780367843 +0000 UTC m=+638.653402034" watchObservedRunningTime="2026-01-29 11:09:32.784505064 +0000 UTC m=+638.657539255" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.760362 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-q2995"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.761999 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.764810 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mffj6" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.781658 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-q2995"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.811549 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.824711 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.837425 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.892244 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.909705 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q2lbc"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.909910 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnxw\" (UniqueName: \"kubernetes.io/projected/72d4f068-dc20-44d0-aca6-c8f0992536e6-kube-api-access-2lnxw\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.909960 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.910005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n25d\" (UniqueName: \"kubernetes.io/projected/7a32568f-244c-432b-8186-683e8bc10371-kube-api-access-4n25d\") pod \"nmstate-metrics-54757c584b-q2995\" (UID: \"7a32568f-244c-432b-8186-683e8bc10371\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.910534 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.946460 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.946516 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.946598 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.952781 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.952872 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d" gracePeriod=600 Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011217 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-nmstate-lock\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011287 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-dbus-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011312 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-ovs-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011333 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzkh\" (UniqueName: \"kubernetes.io/projected/ea391d24-e32c-440b-b5c2-218920192125-kube-api-access-dbzkh\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011367 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4n25d\" (UniqueName: \"kubernetes.io/projected/7a32568f-244c-432b-8186-683e8bc10371-kube-api-access-4n25d\") pod \"nmstate-metrics-54757c584b-q2995\" (UID: \"7a32568f-244c-432b-8186-683e8bc10371\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011427 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnxw\" (UniqueName: \"kubernetes.io/projected/72d4f068-dc20-44d0-aca6-c8f0992536e6-kube-api-access-2lnxw\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011461 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.011567 4593 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.011623 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair podName:72d4f068-dc20-44d0-aca6-c8f0992536e6 nodeName:}" failed. No retries permitted until 2026-01-29 11:09:34.511602436 +0000 UTC m=+640.384636657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-47n46" (UID: "72d4f068-dc20-44d0-aca6-c8f0992536e6") : secret "openshift-nmstate-webhook" not found Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.049564 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnxw\" (UniqueName: \"kubernetes.io/projected/72d4f068-dc20-44d0-aca6-c8f0992536e6-kube-api-access-2lnxw\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.050389 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n25d\" (UniqueName: \"kubernetes.io/projected/7a32568f-244c-432b-8186-683e8bc10371-kube-api-access-4n25d\") pod \"nmstate-metrics-54757c584b-q2995\" (UID: \"7a32568f-244c-432b-8186-683e8bc10371\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.083042 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.087227 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.087974 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.091871 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cfmdq" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.091945 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.091871 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.115001 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-ovs-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116416 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzkh\" (UniqueName: \"kubernetes.io/projected/ea391d24-e32c-440b-b5c2-218920192125-kube-api-access-dbzkh\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116468 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-ovs-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116530 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116562 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116677 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-nmstate-lock\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116754 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvl9n\" (UniqueName: \"kubernetes.io/projected/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-kube-api-access-zvl9n\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: 
\"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116778 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-dbus-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.118551 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-dbus-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.118812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-nmstate-lock\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.155846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzkh\" (UniqueName: \"kubernetes.io/projected/ea391d24-e32c-440b-b5c2-218920192125-kube-api-access-dbzkh\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.218009 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.218131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.218208 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvl9n\" (UniqueName: \"kubernetes.io/projected/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-kube-api-access-zvl9n\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.218795 4593 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.218860 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert podName:2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2 nodeName:}" failed. No retries permitted until 2026-01-29 11:09:34.718844956 +0000 UTC m=+640.591879147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-nck62" (UID: "2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2") : secret "plugin-serving-cert" not found Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.219569 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.233048 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.241075 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvl9n\" (UniqueName: \"kubernetes.io/projected/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-kube-api-access-zvl9n\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.343548 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fdf6c7869-trqgk"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.344531 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.380656 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdf6c7869-trqgk"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428705 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-trusted-ca-bundle\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428742 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-oauth-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428765 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-oauth-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428801 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428848 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8q2r\" (UniqueName: \"kubernetes.io/projected/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-kube-api-access-t8q2r\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428882 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-service-ca\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529597 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8q2r\" (UniqueName: \"kubernetes.io/projected/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-kube-api-access-t8q2r\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529682 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-service-ca\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529733 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-trusted-ca-bundle\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529750 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-oauth-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529769 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc 
kubenswrapper[4593]: I0129 11:09:34.529783 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-oauth-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529824 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.531430 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-oauth-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.531484 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-service-ca\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.531892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.532367 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-trusted-ca-bundle\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.535503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.537329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.540331 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-oauth-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.551053 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t8q2r\" (UniqueName: \"kubernetes.io/projected/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-kube-api-access-t8q2r\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.689233 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-q2995"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.689471 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.732213 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.740030 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.769521 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.804540 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" event={"ID":"7a32568f-244c-432b-8186-683e8bc10371","Type":"ContainerStarted","Data":"2738cebdbe181dd7e7a77d4d417aa44ce887ceeebde33b3991e01e517f9d3c58"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.807351 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q2lbc" event={"ID":"ea391d24-e32c-440b-b5c2-218920192125","Type":"ContainerStarted","Data":"638e9f8ebc583f0f80f1aee775823876d32225024c79ce43ade20b63e5339ee5"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.808554 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827564 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d" exitCode=0 Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827610 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827658 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827701 4593 scope.go:117] "RemoveContainer" containerID="8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50" Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.030787 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46"] Jan 29 11:09:35 crc kubenswrapper[4593]: W0129 11:09:35.047484 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d4f068_dc20_44d0_aca6_c8f0992536e6.slice/crio-890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9 WatchSource:0}: Error finding container 890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9: Status 404 returned error can't find the container with id 890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9 Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.136975 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdf6c7869-trqgk"] Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.282464 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62"] Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.840388 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" event={"ID":"72d4f068-dc20-44d0-aca6-c8f0992536e6","Type":"ContainerStarted","Data":"890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9"} Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.843215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdf6c7869-trqgk" event={"ID":"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce","Type":"ContainerStarted","Data":"b98249628a8681273dfbe20c075f500ca935590bea8450af0bb76b2ae943a69b"} Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.843250 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdf6c7869-trqgk" event={"ID":"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce","Type":"ContainerStarted","Data":"a366ca9ac1937b5b282f224c4d5e7b88852693512ece90a53076c9e3d367d71b"} Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.844976 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" 
event={"ID":"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2","Type":"ContainerStarted","Data":"a07a0f2f3cf331172fb02c16c3b93e4ec6354f121700102be5ce3afc89a5c670"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.863170 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" event={"ID":"7a32568f-244c-432b-8186-683e8bc10371","Type":"ContainerStarted","Data":"ef1c9f7f74d586c20da595eba1cc80f73454d87184fdde928e71e187a675253a"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.865896 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q2lbc" event={"ID":"ea391d24-e32c-440b-b5c2-218920192125","Type":"ContainerStarted","Data":"d479d04d33245f40c4d8407da6fee37ccccbf786201e9a41f1574e43ce762d71"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.866076 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.869145 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" event={"ID":"72d4f068-dc20-44d0-aca6-c8f0992536e6","Type":"ContainerStarted","Data":"f2da04d4ea05914c5736faf7c64b996c8715bc2e3f0ae3f19a2b3b24fe89b9b6"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.870083 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.874884 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" event={"ID":"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2","Type":"ContainerStarted","Data":"5778f62d4ff3a173a41a681e0dcab626cd20931cea220413f9fe2b0952b54566"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.888626 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fdf6c7869-trqgk" podStartSLOduration=4.888606329 podStartE2EDuration="4.888606329s" podCreationTimestamp="2026-01-29 11:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:09:35.866107672 +0000 UTC m=+641.739141883" watchObservedRunningTime="2026-01-29 11:09:38.888606329 +0000 UTC m=+644.761640530" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.889339 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q2lbc" podStartSLOduration=2.108636813 podStartE2EDuration="5.889333908s" podCreationTimestamp="2026-01-29 11:09:33 +0000 UTC" firstStartedPulling="2026-01-29 11:09:34.259140734 +0000 UTC m=+640.132174925" lastFinishedPulling="2026-01-29 11:09:38.039837829 +0000 UTC m=+643.912872020" observedRunningTime="2026-01-29 11:09:38.888848696 +0000 UTC m=+644.761882897" watchObservedRunningTime="2026-01-29 11:09:38.889333908 +0000 UTC m=+644.762368109" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.917409 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" podStartSLOduration=2.928762567 podStartE2EDuration="5.917384908s" podCreationTimestamp="2026-01-29 11:09:33 +0000 UTC" firstStartedPulling="2026-01-29 11:09:35.069157228 +0000 UTC m=+640.942191419" lastFinishedPulling="2026-01-29 11:09:38.057779569 +0000 UTC m=+643.930813760" observedRunningTime="2026-01-29 11:09:38.907197606 +0000 UTC 
m=+644.780231807" watchObservedRunningTime="2026-01-29 11:09:38.917384908 +0000 UTC m=+644.790419109" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.936770 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" podStartSLOduration=2.191935602 podStartE2EDuration="4.936747986s" podCreationTimestamp="2026-01-29 11:09:34 +0000 UTC" firstStartedPulling="2026-01-29 11:09:35.294212894 +0000 UTC m=+641.167247085" lastFinishedPulling="2026-01-29 11:09:38.039025278 +0000 UTC m=+643.912059469" observedRunningTime="2026-01-29 11:09:38.932260316 +0000 UTC m=+644.805294507" watchObservedRunningTime="2026-01-29 11:09:38.936747986 +0000 UTC m=+644.809782177" Jan 29 11:09:40 crc kubenswrapper[4593]: I0129 11:09:40.909271 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" event={"ID":"7a32568f-244c-432b-8186-683e8bc10371","Type":"ContainerStarted","Data":"0c4c940f37c68347cf0f5c8998f22fb55b3baf40d61156dc7955df52023fff26"} Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.256522 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.277423 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" podStartSLOduration=5.290025509 podStartE2EDuration="11.277403643s" podCreationTimestamp="2026-01-29 11:09:33 +0000 UTC" firstStartedPulling="2026-01-29 11:09:34.713368717 +0000 UTC m=+640.586402908" lastFinishedPulling="2026-01-29 11:09:40.700746841 +0000 UTC m=+646.573781042" observedRunningTime="2026-01-29 11:09:40.933044351 +0000 UTC m=+646.806078552" watchObservedRunningTime="2026-01-29 11:09:44.277403643 +0000 UTC m=+650.150437844" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.689620 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.689737 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.694750 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.938076 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.993653 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:09:54 crc kubenswrapper[4593]: I0129 11:09:54.781183 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.654927 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz"] Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.656944 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.663324 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.664125 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz"] Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.682403 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.682503 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.682539 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.783616 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.783673 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.783723 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.784143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.784157 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.810251 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.975565 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:07 crc kubenswrapper[4593]: I0129 11:10:07.276354 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz"] Jan 29 11:10:08 crc kubenswrapper[4593]: I0129 11:10:08.094685 4593 generic.go:334] "Generic (PLEG): container finished" podID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerID="2f9e8302f58d43495da3546dd373f31c2ec8f1080059c2177b2216fe37d06827" exitCode=0 Jan 29 11:10:08 crc kubenswrapper[4593]: I0129 11:10:08.095048 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"2f9e8302f58d43495da3546dd373f31c2ec8f1080059c2177b2216fe37d06827"} Jan 29 11:10:08 crc kubenswrapper[4593]: I0129 11:10:08.095091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerStarted","Data":"073890ae1de6de6485004546b26f86a67ff11f6fb88351c22cfe65b1c90a225d"} Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.052822 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8425v" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" containerID="cri-o://479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" gracePeriod=15 Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.111718 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"dcdc4a58e23cff241a1ebc2410e2e100599d977a3ac38f3d95dd13179d23922f"} Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.111466 4593 generic.go:334] "Generic (PLEG): container finished" podID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" 
containerID="dcdc4a58e23cff241a1ebc2410e2e100599d977a3ac38f3d95dd13179d23922f" exitCode=0 Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.512699 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8425v_ccb12507-4eef-467d-885d-982c68807bda/console/0.log" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.512955 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.635905 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.635956 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.635984 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636039 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636095 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636132 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636157 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca" (OuterVolumeSpecName: "service-ca") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636851 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636985 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config" (OuterVolumeSpecName: "console-config") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.640923 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.640961 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh" (OuterVolumeSpecName: "kube-api-access-57zkh") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "kube-api-access-57zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.648275 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737793 4593 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737841 4593 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737854 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737868 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737879 4593 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737889 4593 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737899 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.124548 4593 generic.go:334] "Generic (PLEG): container finished" podID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerID="f1e0660cfa2f6090117b5c5883f25509dd5a8fa838ee86718510846b105608ae" exitCode=0 Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.124653 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"f1e0660cfa2f6090117b5c5883f25509dd5a8fa838ee86718510846b105608ae"} Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128228 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8425v_ccb12507-4eef-467d-885d-982c68807bda/console/0.log" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128281 4593 generic.go:334] "Generic (PLEG): container finished" podID="ccb12507-4eef-467d-885d-982c68807bda" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" exitCode=2 Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128309 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerDied","Data":"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f"} Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128336 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" 
event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerDied","Data":"b2d3338b1514b5c7e9256324d64b1f803fa4ccbc8cc1a14cc26386a3d7708bb8"} Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128355 4593 scope.go:117] "RemoveContainer" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128388 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.150145 4593 scope.go:117] "RemoveContainer" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" Jan 29 11:10:11 crc kubenswrapper[4593]: E0129 11:10:11.151120 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f\": container with ID starting with 479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f not found: ID does not exist" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.151167 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f"} err="failed to get container status \"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f\": rpc error: code = NotFound desc = could not find container \"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f\": container with ID starting with 479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f not found: ID does not exist" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.159833 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.166045 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.348874 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.461592 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.461773 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.461907 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.463367 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle" (OuterVolumeSpecName: "bundle") pod "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" (UID: "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.477680 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc" (OuterVolumeSpecName: "kube-api-access-p4kmc") pod "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" (UID: "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11"). InnerVolumeSpecName "kube-api-access-p4kmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.483375 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util" (OuterVolumeSpecName: "util") pod "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" (UID: "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.563450 4593 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.563480 4593 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.563489 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.083070 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb12507-4eef-467d-885d-982c68807bda" path="/var/lib/kubelet/pods/ccb12507-4eef-467d-885d-982c68807bda/volumes" Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.143330 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"073890ae1de6de6485004546b26f86a67ff11f6fb88351c22cfe65b1c90a225d"} Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.143373 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073890ae1de6de6485004546b26f86a67ff11f6fb88351c22cfe65b1c90a225d" Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.143383 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.502164 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk"] Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503512 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="extract" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503528 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="extract" Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503548 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503556 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503585 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="pull" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503593 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="pull" Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503607 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="util" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503614 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="util" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.504379 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="extract" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.504410 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.505472 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.508942 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.509342 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.510517 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.510782 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.510988 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gl72r" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.535951 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk"] Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.687343 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh2k\" (UniqueName: \"kubernetes.io/projected/421156e9-d8d3-4112-bd58-d09c40a70a12-kube-api-access-vmh2k\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.687761 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-apiservice-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.687831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-webhook-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.789153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh2k\" (UniqueName: \"kubernetes.io/projected/421156e9-d8d3-4112-bd58-d09c40a70a12-kube-api-access-vmh2k\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.789498 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-apiservice-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.789688 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-webhook-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.796846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-webhook-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.808401 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-apiservice-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.831029 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh2k\" (UniqueName: \"kubernetes.io/projected/421156e9-d8d3-4112-bd58-d09c40a70a12-kube-api-access-vmh2k\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.838851 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4"] Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.839832 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.843191 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.843736 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.844000 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5nljv" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.853946 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4"] Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.992330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.992382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-webhook-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.992451 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlh76\" (UniqueName: \"kubernetes.io/projected/c3381187-83f6-4877-8d72-3ed30f360a86-kube-api-access-hlh76\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.093106 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlh76\" (UniqueName: \"kubernetes.io/projected/c3381187-83f6-4877-8d72-3ed30f360a86-kube-api-access-hlh76\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.093162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.093195 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-webhook-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 
11:10:23.096778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-webhook-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.108252 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.121134 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.154999 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlh76\" (UniqueName: \"kubernetes.io/projected/c3381187-83f6-4877-8d72-3ed30f360a86-kube-api-access-hlh76\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.186769 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.587109 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4"] Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.708588 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk"] Jan 29 11:10:24 crc kubenswrapper[4593]: I0129 11:10:24.208102 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" event={"ID":"421156e9-d8d3-4112-bd58-d09c40a70a12","Type":"ContainerStarted","Data":"de8d47ca6715760c776d46fe1e47f8c9ba0ffa5f00135b86c26bccffbd4ebc85"} Jan 29 11:10:24 crc kubenswrapper[4593]: I0129 11:10:24.210116 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" event={"ID":"c3381187-83f6-4877-8d72-3ed30f360a86","Type":"ContainerStarted","Data":"561adee80387774a85d164bd590a76efa44ea14f07e093f3d278546b2b2f389b"} Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.253227 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" event={"ID":"c3381187-83f6-4877-8d72-3ed30f360a86","Type":"ContainerStarted","Data":"da847d1ec79e66e150dac98a643a705701e8adbd485dba899b5f5eb68d3b68f1"} Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.253783 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.254674 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" 
event={"ID":"421156e9-d8d3-4112-bd58-d09c40a70a12","Type":"ContainerStarted","Data":"6478b453cfe7642626d97fd9fc7023a2fd10c542d2e3f8ed40bffc629a6d68aa"} Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.254876 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.275426 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" podStartSLOduration=2.273844272 podStartE2EDuration="8.275409067s" podCreationTimestamp="2026-01-29 11:10:22 +0000 UTC" firstStartedPulling="2026-01-29 11:10:23.598141914 +0000 UTC m=+689.471176105" lastFinishedPulling="2026-01-29 11:10:29.599706709 +0000 UTC m=+695.472740900" observedRunningTime="2026-01-29 11:10:30.275397277 +0000 UTC m=+696.148431468" watchObservedRunningTime="2026-01-29 11:10:30.275409067 +0000 UTC m=+696.148443258" Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.294618 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" podStartSLOduration=2.4254703060000002 podStartE2EDuration="8.294600331s" podCreationTimestamp="2026-01-29 11:10:22 +0000 UTC" firstStartedPulling="2026-01-29 11:10:23.713357312 +0000 UTC m=+689.586391503" lastFinishedPulling="2026-01-29 11:10:29.582487337 +0000 UTC m=+695.455521528" observedRunningTime="2026-01-29 11:10:30.294026756 +0000 UTC m=+696.167060967" watchObservedRunningTime="2026-01-29 11:10:30.294600331 +0000 UTC m=+696.167634522" Jan 29 11:10:43 crc kubenswrapper[4593]: I0129 11:10:43.192419 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.124533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.822773 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.823508 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.827210 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-54s6j"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.830110 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.834085 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.834289 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.834420 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tqjk4" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.843507 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.847891 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907745 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-reloader\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907797 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4d2v\" (UniqueName: \"kubernetes.io/projected/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-kube-api-access-m4d2v\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907843 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbpl\" (UniqueName: \"kubernetes.io/projected/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-kube-api-access-zbbpl\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907970 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-conf\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908030 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908051 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908105 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-sockets\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908170 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-startup\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.927945 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-m77zw"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.928846 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m77zw" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.931455 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.932254 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.932426 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.932616 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lhb8v" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.947718 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hvqbg"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.948586 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.954432 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.980005 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hvqbg"] Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.008988 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-startup\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-reloader\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009096 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4d2v\" (UniqueName: \"kubernetes.io/projected/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-kube-api-access-m4d2v\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009118 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbbpl\" (UniqueName: \"kubernetes.io/projected/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-kube-api-access-zbbpl\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-conf\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009181 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009219 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009239 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009260 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-sockets\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.011548 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.011793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-conf\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.011853 4593 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.011891 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs podName:9eb36e6e-e554-4b1a-9750-cd81c4c8d985 nodeName:}" failed. No retries permitted until 2026-01-29 11:11:04.511876251 +0000 UTC m=+730.384910442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs") pod "frr-k8s-54s6j" (UID: "9eb36e6e-e554-4b1a-9750-cd81c4c8d985") : secret "frr-k8s-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.012293 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-reloader\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.012497 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-startup\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.012877 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-sockets\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.036227 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.039839 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbbpl\" (UniqueName: \"kubernetes.io/projected/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-kube-api-access-zbbpl\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 
11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.053951 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4d2v\" (UniqueName: \"kubernetes.io/projected/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-kube-api-access-m4d2v\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111343 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4z4s\" (UniqueName: \"kubernetes.io/projected/37969e5d-3111-45cc-a711-da443a473c52-kube-api-access-d4z4s\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111416 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-metrics-certs\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111440 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-cert\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111454 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111488 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/37969e5d-3111-45cc-a711-da443a473c52-metallb-excludel2\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111509 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjvz\" (UniqueName: \"kubernetes.io/projected/3462ad7c-24f3-4c73-990d-a0f471d08d1d-kube-api-access-ksjvz\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111526 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.145528 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4z4s\" (UniqueName: \"kubernetes.io/projected/37969e5d-3111-45cc-a711-da443a473c52-kube-api-access-d4z4s\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212300 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-metrics-certs\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212345 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-cert\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212362 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212408 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/37969e5d-3111-45cc-a711-da443a473c52-metallb-excludel2\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212451 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjvz\" (UniqueName: \"kubernetes.io/projected/3462ad7c-24f3-4c73-990d-a0f471d08d1d-kube-api-access-ksjvz\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212483 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213796 4593 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213839 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist podName:37969e5d-3111-45cc-a711-da443a473c52 nodeName:}" failed. No retries permitted until 2026-01-29 11:11:04.713826714 +0000 UTC m=+730.586860905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist") pod "speaker-m77zw" (UID: "37969e5d-3111-45cc-a711-da443a473c52") : secret "metallb-memberlist" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213890 4593 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213920 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs podName:37969e5d-3111-45cc-a711-da443a473c52 nodeName:}" failed. No retries permitted until 2026-01-29 11:11:04.713911777 +0000 UTC m=+730.586945968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs") pod "speaker-m77zw" (UID: "37969e5d-3111-45cc-a711-da443a473c52") : secret "speaker-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.213918 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/37969e5d-3111-45cc-a711-da443a473c52-metallb-excludel2\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.216998 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.220253 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-metrics-certs\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.233013 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjvz\" (UniqueName: \"kubernetes.io/projected/3462ad7c-24f3-4c73-990d-a0f471d08d1d-kube-api-access-ksjvz\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.239372 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-cert\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.246353 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4z4s\" (UniqueName: \"kubernetes.io/projected/37969e5d-3111-45cc-a711-da443a473c52-kube-api-access-d4z4s\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.268963 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.496360 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hvqbg"] Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.515776 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.520874 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.622535 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h"] Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.718742 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.719162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.723811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.723884 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.757196 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.850157 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: W0129 11:11:04.877134 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37969e5d_3111_45cc_a711_da443a473c52.slice/crio-c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5 WatchSource:0}: Error finding container c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5: Status 404 returned error can't find the container with id c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5 Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.454748 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m77zw" event={"ID":"37969e5d-3111-45cc-a711-da443a473c52","Type":"ContainerStarted","Data":"da49a101b595e47000ffef939bc559d4f095da5a75f2d974d661e3b975516c67"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.455090 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m77zw" event={"ID":"37969e5d-3111-45cc-a711-da443a473c52","Type":"ContainerStarted","Data":"be297179f6d2b422103350b09de4b9b76026c9723c9cfd2f6d992b8bb2ed0691"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.455107 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m77zw" event={"ID":"37969e5d-3111-45cc-a711-da443a473c52","Type":"ContainerStarted","Data":"c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.455393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-m77zw" Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hvqbg" event={"ID":"3462ad7c-24f3-4c73-990d-a0f471d08d1d","Type":"ContainerStarted","Data":"ab41ed837969b02ad1310e3af6420286facfbf8c8ff6f3eeeba2d02457aa25b2"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457296 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hvqbg" event={"ID":"3462ad7c-24f3-4c73-990d-a0f471d08d1d","Type":"ContainerStarted","Data":"fde2705cf396d756261abc7932844c7198e4b2c63b7935d628ca0c77e740d14f"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hvqbg" event={"ID":"3462ad7c-24f3-4c73-990d-a0f471d08d1d","Type":"ContainerStarted","Data":"ceb22d3eea8a11e5bbd98b0a2719c9fe00649a452d46e94bfbe80e4b69f88a81"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457421 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.458696 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" event={"ID":"45d808cf-80c4-4f7b-a172-76e4ecd9e37b","Type":"ContainerStarted","Data":"bb473d1e9c034889468f435b70a468a54243aba4aec3ff16c21c09b1e2914d66"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.461217 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"f68ef21eb5b648b42a784e45953e8e91e591e2788890a8901af9e3bdc88172f8"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.535381 4593 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-m77zw" podStartSLOduration=2.535357984 podStartE2EDuration="2.535357984s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:11:05.498013347 +0000 UTC m=+731.371047538" watchObservedRunningTime="2026-01-29 11:11:05.535357984 +0000 UTC m=+731.408392175" Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.536665 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hvqbg" podStartSLOduration=2.536659369 podStartE2EDuration="2.536659369s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:11:05.53114904 +0000 UTC m=+731.404183231" watchObservedRunningTime="2026-01-29 11:11:05.536659369 +0000 UTC m=+731.409693560" Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.523305 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" event={"ID":"45d808cf-80c4-4f7b-a172-76e4ecd9e37b","Type":"ContainerStarted","Data":"417b06ec496d9e33ef508a9a5eb79c9cd4c80fda52502e3d84e968f700ccb089"} Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.523922 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.525215 4593 generic.go:334] "Generic (PLEG): container finished" podID="9eb36e6e-e554-4b1a-9750-cd81c4c8d985" containerID="dfb27ea50318b4478862fccd52a5fefccc1ba739a62073569464ba01cca98a8e" exitCode=0 Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.525253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerDied","Data":"dfb27ea50318b4478862fccd52a5fefccc1ba739a62073569464ba01cca98a8e"} Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.548178 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" podStartSLOduration=2.365048033 podStartE2EDuration="10.548157712s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="2026-01-29 11:11:04.632618312 +0000 UTC m=+730.505652503" lastFinishedPulling="2026-01-29 11:11:12.815727991 +0000 UTC m=+738.688762182" observedRunningTime="2026-01-29 11:11:13.543034134 +0000 UTC m=+739.416068335" watchObservedRunningTime="2026-01-29 11:11:13.548157712 +0000 UTC m=+739.421191903" Jan 29 11:11:14 crc kubenswrapper[4593]: I0129 11:11:14.274222 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:14 crc kubenswrapper[4593]: I0129 11:11:14.531721 4593 generic.go:334] "Generic (PLEG): container finished" podID="9eb36e6e-e554-4b1a-9750-cd81c4c8d985" containerID="60c8adf1de3cd4ec9fda6d23d3e35ec2660bce6b71ca05745cad2970c89c5e59" exitCode=0 Jan 29 11:11:14 crc kubenswrapper[4593]: I0129 11:11:14.532677 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerDied","Data":"60c8adf1de3cd4ec9fda6d23d3e35ec2660bce6b71ca05745cad2970c89c5e59"} Jan 29 11:11:15 crc 
kubenswrapper[4593]: I0129 11:11:15.541953 4593 generic.go:334] "Generic (PLEG): container finished" podID="9eb36e6e-e554-4b1a-9750-cd81c4c8d985" containerID="a691257679622b12c0c30b77e732c2da4a5c5f89ca173684b80680b82f49e173" exitCode=0 Jan 29 11:11:15 crc kubenswrapper[4593]: I0129 11:11:15.541998 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerDied","Data":"a691257679622b12c0c30b77e732c2da4a5c5f89ca173684b80680b82f49e173"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.555922 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"f4a6eee69aa21abde7a7382f10b3cfee8aa3fa419a520f709238bc39953e25f1"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556237 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"2afd8f1ea5f7c176a015a86930077c08973a74690376e8246054566d18d12877"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556249 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"abab468dc5e54306a35d20ff24be0f4739e779de410923a225f9d5d1fec78e0d"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556257 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"aaf0b454a1aaeda4813d7fee96db1c3462a420a29ee8f7f3075266a386ddf639"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556265 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"ff46cc5a5ebdfa6fc97224c333c7c70ad8060803b3f4aaeb1a3415a9b9155697"} Jan 29 11:11:17 crc kubenswrapper[4593]: I0129 11:11:17.565688 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"03af9554e98ea3d9085abb6ea4c6b02d486e4ee0a46c81b62c95e7f7787da7dc"} Jan 29 11:11:17 crc kubenswrapper[4593]: I0129 11:11:17.566802 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:17 crc kubenswrapper[4593]: I0129 11:11:17.590822 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-54s6j" podStartSLOduration=6.662991825 podStartE2EDuration="14.590798943s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="2026-01-29 11:11:04.868089618 +0000 UTC m=+730.741123809" lastFinishedPulling="2026-01-29 11:11:12.795896736 +0000 UTC m=+738.668930927" observedRunningTime="2026-01-29 11:11:17.585904702 +0000 UTC m=+743.458938903" watchObservedRunningTime="2026-01-29 11:11:17.590798943 +0000 UTC m=+743.463833134" Jan 29 11:11:19 crc kubenswrapper[4593]: I0129 11:11:19.757967 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:19 crc kubenswrapper[4593]: I0129 11:11:19.796229 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:24 crc kubenswrapper[4593]: I0129 
11:11:24.150167 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:24 crc kubenswrapper[4593]: I0129 11:11:24.857761 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-m77zw" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.704903 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.706183 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.709795 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.710348 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.712082 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9p9rv" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.729884 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.768146 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"openstack-operator-index-kxm2v\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.869512 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"openstack-operator-index-kxm2v\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.904518 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"openstack-operator-index-kxm2v\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:28 crc kubenswrapper[4593]: I0129 11:11:28.027091 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:28 crc kubenswrapper[4593]: I0129 11:11:28.502451 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:28 crc kubenswrapper[4593]: I0129 11:11:28.650729 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerStarted","Data":"d68540f4c1d7fff55c5e6157f96ccd88b42798a1072e01f0dfe99dc863e2bfa1"} Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.035744 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.644662 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sbxwt"] Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.647402 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.657683 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbxwt"] Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.768907 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvv6t\" (UniqueName: \"kubernetes.io/projected/0661b605-afb6-4341-9703-ea25a3afc19d-kube-api-access-gvv6t\") pod \"openstack-operator-index-sbxwt\" (UID: \"0661b605-afb6-4341-9703-ea25a3afc19d\") " pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.870482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvv6t\" (UniqueName: \"kubernetes.io/projected/0661b605-afb6-4341-9703-ea25a3afc19d-kube-api-access-gvv6t\") pod \"openstack-operator-index-sbxwt\" (UID: \"0661b605-afb6-4341-9703-ea25a3afc19d\") " pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.890668 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvv6t\" (UniqueName: \"kubernetes.io/projected/0661b605-afb6-4341-9703-ea25a3afc19d-kube-api-access-gvv6t\") pod \"openstack-operator-index-sbxwt\" (UID: \"0661b605-afb6-4341-9703-ea25a3afc19d\") " pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.974167 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.580962 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbxwt"] Jan 29 11:11:34 crc kubenswrapper[4593]: W0129 11:11:34.589149 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0661b605_afb6_4341_9703_ea25a3afc19d.slice/crio-71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780 WatchSource:0}: Error finding container 71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780: Status 404 returned error can't find the container with id 71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780 Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.699590 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerStarted","Data":"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1"} Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.699605 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kxm2v" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" containerID="cri-o://a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" gracePeriod=2 Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.704747 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbxwt" event={"ID":"0661b605-afb6-4341-9703-ea25a3afc19d","Type":"ContainerStarted","Data":"71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780"} Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.767028 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.798155 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kxm2v" podStartSLOduration=2.690120556 podStartE2EDuration="7.798138202s" podCreationTimestamp="2026-01-29 11:11:27 +0000 UTC" firstStartedPulling="2026-01-29 11:11:28.520462309 +0000 UTC m=+754.393496500" lastFinishedPulling="2026-01-29 11:11:33.628479955 +0000 UTC m=+759.501514146" observedRunningTime="2026-01-29 11:11:34.747075736 +0000 UTC m=+760.620109927" watchObservedRunningTime="2026-01-29 11:11:34.798138202 +0000 UTC m=+760.671172403" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.090923 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.147245 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.152459 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w" (OuterVolumeSpecName: "kube-api-access-r2q4w") pod "7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" (UID: "7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5"). InnerVolumeSpecName "kube-api-access-r2q4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.249407 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.712486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbxwt" event={"ID":"0661b605-afb6-4341-9703-ea25a3afc19d","Type":"ContainerStarted","Data":"9a696a11428c248a7b1d6ed9d4a2ec9d549276382fc56a651079e894a1eb7a0c"} Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714102 4593 generic.go:334] "Generic (PLEG): container finished" podID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" exitCode=0 Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714143 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerDied","Data":"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1"} Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714192 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerDied","Data":"d68540f4c1d7fff55c5e6157f96ccd88b42798a1072e01f0dfe99dc863e2bfa1"} Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714216 4593 scope.go:117] "RemoveContainer" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714692 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.730396 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sbxwt" podStartSLOduration=4.483980676 podStartE2EDuration="4.730375178s" podCreationTimestamp="2026-01-29 11:11:31 +0000 UTC" firstStartedPulling="2026-01-29 11:11:34.593371711 +0000 UTC m=+760.466405902" lastFinishedPulling="2026-01-29 11:11:34.839766213 +0000 UTC m=+760.712800404" observedRunningTime="2026-01-29 11:11:35.729875494 +0000 UTC m=+761.602909695" watchObservedRunningTime="2026-01-29 11:11:35.730375178 +0000 UTC m=+761.603409369" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.740988 4593 scope.go:117] "RemoveContainer" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" Jan 29 11:11:35 crc kubenswrapper[4593]: E0129 11:11:35.741471 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1\": container with ID starting with a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1 not found: ID does not exist" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.741573 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1"} err="failed to get container status \"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1\": rpc error: code = NotFound desc = could not find container \"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1\": container with ID starting with a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1 not found: ID does not exist" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.752327 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.756455 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:37 crc kubenswrapper[4593]: I0129 11:11:37.084598 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" path="/var/lib/kubelet/pods/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5/volumes" Jan 29 11:11:39 crc kubenswrapper[4593]: I0129 11:11:39.805825 4593 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 11:11:41 crc kubenswrapper[4593]: I0129 11:11:41.975745 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:41 crc kubenswrapper[4593]: I0129 11:11:41.976016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:42 crc kubenswrapper[4593]: I0129 11:11:42.042887 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:42 crc kubenswrapper[4593]: I0129 11:11:42.783334 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:44 crc 
kubenswrapper[4593]: I0129 11:11:44.077721 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc"] Jan 29 11:11:44 crc kubenswrapper[4593]: E0129 11:11:44.078214 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.078226 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.078369 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.079293 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.082202 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-l67nj" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.087108 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc"] Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.180499 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.180559 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.180732 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.281759 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.281825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod 
\"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.281846 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.282462 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.282516 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.305372 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.399406 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.613066 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc"] Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.776580 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerStarted","Data":"49810152f3eae5df3cd44041b27b8d1aa920d4dabd2d3cd1fd576348c19adca0"} Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.776620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerStarted","Data":"5d6dd77b97f1625ba0241d533476e086d054fbdffd6b227fc9db20889d1914c3"} Jan 29 11:11:45 crc kubenswrapper[4593]: I0129 11:11:45.784049 4593 generic.go:334] "Generic (PLEG): container finished" podID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerID="49810152f3eae5df3cd44041b27b8d1aa920d4dabd2d3cd1fd576348c19adca0" exitCode=0 Jan 29 11:11:45 crc kubenswrapper[4593]: I0129 11:11:45.784121 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"49810152f3eae5df3cd44041b27b8d1aa920d4dabd2d3cd1fd576348c19adca0"} Jan 29 11:11:46 crc kubenswrapper[4593]: I0129 11:11:46.794128 4593 generic.go:334] "Generic (PLEG): container finished" podID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerID="56bc419c08dbd0401bac21f6b2226460477de8cd20a4a5bb2aa955c2785709aa" exitCode=0 Jan 29 11:11:46 crc kubenswrapper[4593]: I0129 11:11:46.794315 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"56bc419c08dbd0401bac21f6b2226460477de8cd20a4a5bb2aa955c2785709aa"} Jan 29 11:11:47 crc kubenswrapper[4593]: I0129 11:11:47.813521 4593 generic.go:334] "Generic (PLEG): container finished" podID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerID="59edbce0d09644e6eb3a08d35e615c9401aa50707044d47ae64393a5974d0edc" exitCode=0 Jan 29 11:11:47 crc kubenswrapper[4593]: I0129 11:11:47.813573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"59edbce0d09644e6eb3a08d35e615c9401aa50707044d47ae64393a5974d0edc"} Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.050333 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.226698 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.226808 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.227266 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.227382 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle" (OuterVolumeSpecName: "bundle") pod "d389d4ca-e0e5-4a15-8ff2-afa4745998fa" (UID: "d389d4ca-e0e5-4a15-8ff2-afa4745998fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.227918 4593 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.232960 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b" (OuterVolumeSpecName: "kube-api-access-jbh6b") pod "d389d4ca-e0e5-4a15-8ff2-afa4745998fa" (UID: "d389d4ca-e0e5-4a15-8ff2-afa4745998fa"). InnerVolumeSpecName "kube-api-access-jbh6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.242816 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util" (OuterVolumeSpecName: "util") pod "d389d4ca-e0e5-4a15-8ff2-afa4745998fa" (UID: "d389d4ca-e0e5-4a15-8ff2-afa4745998fa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.329604 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.329934 4593 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.833338 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"5d6dd77b97f1625ba0241d533476e086d054fbdffd6b227fc9db20889d1914c3"} Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.833389 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d6dd77b97f1625ba0241d533476e086d054fbdffd6b227fc9db20889d1914c3" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.833453 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.175466 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7"] Jan 29 11:11:56 crc kubenswrapper[4593]: E0129 11:11:56.176057 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="pull" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176072 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="pull" Jan 29 11:11:56 crc kubenswrapper[4593]: E0129 11:11:56.176087 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="extract" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176096 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="extract" Jan 29 11:11:56 crc kubenswrapper[4593]: E0129 11:11:56.176120 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="util" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176128 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="util" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176251 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="extract" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176800 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.189502 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-45997" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.217237 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7"] Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.324127 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkl6\" (UniqueName: \"kubernetes.io/projected/c8e623f1-2830-4c78-b17a-6000f32978a3-kube-api-access-jwkl6\") pod \"openstack-operator-controller-init-55ccc59995-d7xm7\" (UID: \"c8e623f1-2830-4c78-b17a-6000f32978a3\") " pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.425862 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwkl6\" (UniqueName: \"kubernetes.io/projected/c8e623f1-2830-4c78-b17a-6000f32978a3-kube-api-access-jwkl6\") pod \"openstack-operator-controller-init-55ccc59995-d7xm7\" (UID: \"c8e623f1-2830-4c78-b17a-6000f32978a3\") " pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.449230 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkl6\" (UniqueName: \"kubernetes.io/projected/c8e623f1-2830-4c78-b17a-6000f32978a3-kube-api-access-jwkl6\") pod \"openstack-operator-controller-init-55ccc59995-d7xm7\" (UID: \"c8e623f1-2830-4c78-b17a-6000f32978a3\") " pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.495209 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.962296 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7"] Jan 29 11:11:57 crc kubenswrapper[4593]: I0129 11:11:57.881844 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" event={"ID":"c8e623f1-2830-4c78-b17a-6000f32978a3","Type":"ContainerStarted","Data":"a9d11ab8be468bada64bb970bd51e89c9dfae48c3df541beddb88eefd0b0d741"} Jan 29 11:12:03 crc kubenswrapper[4593]: I0129 11:12:03.946030 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:12:03 crc kubenswrapper[4593]: I0129 11:12:03.946646 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:12:04 crc kubenswrapper[4593]: I0129 11:12:04.932152 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" event={"ID":"c8e623f1-2830-4c78-b17a-6000f32978a3","Type":"ContainerStarted","Data":"a9d74499a95a4b3430bb3b0d4471e5f5640e815956d1986537d55802862f9574"} Jan 29 11:12:04 crc kubenswrapper[4593]: I0129 11:12:04.932538 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:12:04 crc kubenswrapper[4593]: I0129 11:12:04.962241 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" podStartSLOduration=2.191866503 podStartE2EDuration="8.96222711s" podCreationTimestamp="2026-01-29 11:11:56 +0000 UTC" firstStartedPulling="2026-01-29 11:11:56.965606074 +0000 UTC m=+782.838640255" lastFinishedPulling="2026-01-29 11:12:03.735966671 +0000 UTC m=+789.609000862" observedRunningTime="2026-01-29 11:12:04.961241616 +0000 UTC m=+790.834275807" watchObservedRunningTime="2026-01-29 11:12:04.96222711 +0000 UTC m=+790.835261291" Jan 29 11:12:16 crc kubenswrapper[4593]: I0129 11:12:16.499997 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:12:33 crc kubenswrapper[4593]: I0129 11:12:33.946445 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:12:33 crc kubenswrapper[4593]: I0129 11:12:33.947068 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.286141 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.286988 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.290625 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wk95c" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.291397 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.292311 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.294871 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rqbh4" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.315131 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.323222 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.336255 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sp6q\" (UniqueName: \"kubernetes.io/projected/c5e6d3a8-d6d9-4445-9708-28b88928333e-kube-api-access-4sp6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7ns7q\" (UID: \"c5e6d3a8-d6d9-4445-9708-28b88928333e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.336362 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spgf\" (UniqueName: \"kubernetes.io/projected/e35e9127-0337-4860-b938-bb477a408f1e-kube-api-access-9spgf\") pod \"cinder-operator-controller-manager-8d874c8fc-7hmqc\" (UID: \"e35e9127-0337-4860-b938-bb477a408f1e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.364136 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.364904 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.367197 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-shh6b" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.383507 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.390319 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.391283 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.395566 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lnr6s" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.430316 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.431267 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437098 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sp6q\" (UniqueName: \"kubernetes.io/projected/c5e6d3a8-d6d9-4445-9708-28b88928333e-kube-api-access-4sp6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7ns7q\" (UID: \"c5e6d3a8-d6d9-4445-9708-28b88928333e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437198 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8q5\" (UniqueName: \"kubernetes.io/projected/499923d8-4593-4225-bc4c-6166003a0bb3-kube-api-access-mb8q5\") pod \"glance-operator-controller-manager-8886f4c47-2ml7m\" (UID: \"499923d8-4593-4225-bc4c-6166003a0bb3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437244 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xwf\" (UniqueName: \"kubernetes.io/projected/734187ee-1e17-4cdc-b3bb-cfbd6e424793-kube-api-access-k5xwf\") pod \"designate-operator-controller-manager-6d9697b7f4-xw2pz\" (UID: \"734187ee-1e17-4cdc-b3bb-cfbd6e424793\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437276 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9spgf\" (UniqueName: \"kubernetes.io/projected/e35e9127-0337-4860-b938-bb477a408f1e-kube-api-access-9spgf\") pod \"cinder-operator-controller-manager-8d874c8fc-7hmqc\" (UID: \"e35e9127-0337-4860-b938-bb477a408f1e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.440554 4593 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-csc5k" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.456729 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.469099 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sp6q\" (UniqueName: \"kubernetes.io/projected/c5e6d3a8-d6d9-4445-9708-28b88928333e-kube-api-access-4sp6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7ns7q\" (UID: \"c5e6d3a8-d6d9-4445-9708-28b88928333e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.475363 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spgf\" (UniqueName: \"kubernetes.io/projected/e35e9127-0337-4860-b938-bb477a408f1e-kube-api-access-9spgf\") pod \"cinder-operator-controller-manager-8d874c8fc-7hmqc\" (UID: \"e35e9127-0337-4860-b938-bb477a408f1e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.498970 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.516128 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.516376 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.532653 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-m9h5b" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.535713 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.541552 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksppz\" (UniqueName: \"kubernetes.io/projected/50471b23-1d0d-4bd9-a66f-a89b3a39a612-kube-api-access-ksppz\") pod \"heat-operator-controller-manager-69d6db494d-xqcrc\" (UID: \"50471b23-1d0d-4bd9-a66f-a89b3a39a612\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.554580 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8q5\" (UniqueName: \"kubernetes.io/projected/499923d8-4593-4225-bc4c-6166003a0bb3-kube-api-access-mb8q5\") pod \"glance-operator-controller-manager-8886f4c47-2ml7m\" (UID: \"499923d8-4593-4225-bc4c-6166003a0bb3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.554711 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xwf\" (UniqueName: \"kubernetes.io/projected/734187ee-1e17-4cdc-b3bb-cfbd6e424793-kube-api-access-k5xwf\") pod \"designate-operator-controller-manager-6d9697b7f4-xw2pz\" (UID: \"734187ee-1e17-4cdc-b3bb-cfbd6e424793\") " 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.554762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7vc\" (UniqueName: \"kubernetes.io/projected/50a8381e-e59b-4400-9209-c33ef4f99c23-kube-api-access-5t7vc\") pod \"horizon-operator-controller-manager-5fb775575f-98l2v\" (UID: \"50a8381e-e59b-4400-9209-c33ef4f99c23\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.557467 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.597572 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.597910 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q26cz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.618464 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.618767 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.625392 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xwf\" (UniqueName: \"kubernetes.io/projected/734187ee-1e17-4cdc-b3bb-cfbd6e424793-kube-api-access-k5xwf\") pod \"designate-operator-controller-manager-6d9697b7f4-xw2pz\" (UID: \"734187ee-1e17-4cdc-b3bb-cfbd6e424793\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.636416 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656612 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksppz\" (UniqueName: \"kubernetes.io/projected/50471b23-1d0d-4bd9-a66f-a89b3a39a612-kube-api-access-ksppz\") pod \"heat-operator-controller-manager-69d6db494d-xqcrc\" (UID: \"50471b23-1d0d-4bd9-a66f-a89b3a39a612\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656731 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7vc\" (UniqueName: \"kubernetes.io/projected/50a8381e-e59b-4400-9209-c33ef4f99c23-kube-api-access-5t7vc\") pod \"horizon-operator-controller-manager-5fb775575f-98l2v\" (UID: \"50a8381e-e59b-4400-9209-c33ef4f99c23\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656770 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656852 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gkt\" (UniqueName: \"kubernetes.io/projected/c2cda883-37e6-4c21-b320-4962ffdc98b3-kube-api-access-w5gkt\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.661268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8q5\" (UniqueName: \"kubernetes.io/projected/499923d8-4593-4225-bc4c-6166003a0bb3-kube-api-access-mb8q5\") pod \"glance-operator-controller-manager-8886f4c47-2ml7m\" (UID: \"499923d8-4593-4225-bc4c-6166003a0bb3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.675673 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.682670 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.696364 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7vc\" (UniqueName: \"kubernetes.io/projected/50a8381e-e59b-4400-9209-c33ef4f99c23-kube-api-access-5t7vc\") pod \"horizon-operator-controller-manager-5fb775575f-98l2v\" (UID: \"50a8381e-e59b-4400-9209-c33ef4f99c23\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.713007 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.723576 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.724160 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksppz\" (UniqueName: \"kubernetes.io/projected/50471b23-1d0d-4bd9-a66f-a89b3a39a612-kube-api-access-ksppz\") pod \"heat-operator-controller-manager-69d6db494d-xqcrc\" (UID: \"50471b23-1d0d-4bd9-a66f-a89b3a39a612\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.724523 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.730317 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4vqwx" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.742059 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.743331 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.746914 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762315 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46bz\" (UniqueName: \"kubernetes.io/projected/812ebcfb-766d-4a1b-aaaa-2dd5a96ce047-kube-api-access-k46bz\") pod \"ironic-operator-controller-manager-5f4b8bd54d-t584q\" (UID: \"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762469 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rtrkb" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762470 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gkt\" (UniqueName: \"kubernetes.io/projected/c2cda883-37e6-4c21-b320-4962ffdc98b3-kube-api-access-w5gkt\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: E0129 11:12:34.763653 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 29 11:12:34 crc kubenswrapper[4593]: E0129 11:12:34.763714 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:35.263695675 +0000 UTC m=+821.136729866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.763911 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.781075 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.828725 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.829728 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.834260 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-29ncp" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.844867 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.845753 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.854072 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gkt\" (UniqueName: \"kubernetes.io/projected/c2cda883-37e6-4c21-b320-4962ffdc98b3-kube-api-access-w5gkt\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.854528 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ttrjz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.861725 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.863976 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsxk\" (UniqueName: \"kubernetes.io/projected/0881deda-c42a-48d8-9059-b7eaf66c0f9f-kube-api-access-ptsxk\") pod \"manila-operator-controller-manager-7dd968899f-c89cq\" (UID: \"0881deda-c42a-48d8-9059-b7eaf66c0f9f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.864038 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbs8t\" (UniqueName: \"kubernetes.io/projected/62efedcb-a194-4692-8e84-a0da7777a400-kube-api-access-sbs8t\") pod \"mariadb-operator-controller-manager-67bf948998-zx6r8\" (UID: \"62efedcb-a194-4692-8e84-a0da7777a400\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.864113 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lzd\" (UniqueName: \"kubernetes.io/projected/cdb96936-cd34-44fd-94b5-5570688fdfe6-kube-api-access-n9lzd\") pod \"keystone-operator-controller-manager-84f48565d4-xf5fn\" (UID: \"cdb96936-cd34-44fd-94b5-5570688fdfe6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.864176 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k46bz\" (UniqueName: \"kubernetes.io/projected/812ebcfb-766d-4a1b-aaaa-2dd5a96ce047-kube-api-access-k46bz\") pod \"ironic-operator-controller-manager-5f4b8bd54d-t584q\" (UID: \"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.880902 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.883113 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.899867 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.900832 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.912235 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pv9gb" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.963454 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k46bz\" (UniqueName: \"kubernetes.io/projected/812ebcfb-766d-4a1b-aaaa-2dd5a96ce047-kube-api-access-k46bz\") pod \"ironic-operator-controller-manager-5f4b8bd54d-t584q\" (UID: \"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.965732 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9lzd\" (UniqueName: \"kubernetes.io/projected/cdb96936-cd34-44fd-94b5-5570688fdfe6-kube-api-access-n9lzd\") pod \"keystone-operator-controller-manager-84f48565d4-xf5fn\" (UID: \"cdb96936-cd34-44fd-94b5-5570688fdfe6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.965852 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptsxk\" (UniqueName: \"kubernetes.io/projected/0881deda-c42a-48d8-9059-b7eaf66c0f9f-kube-api-access-ptsxk\") pod \"manila-operator-controller-manager-7dd968899f-c89cq\" (UID: \"0881deda-c42a-48d8-9059-b7eaf66c0f9f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.965891 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbs8t\" (UniqueName: \"kubernetes.io/projected/62efedcb-a194-4692-8e84-a0da7777a400-kube-api-access-sbs8t\") pod \"mariadb-operator-controller-manager-67bf948998-zx6r8\" (UID: \"62efedcb-a194-4692-8e84-a0da7777a400\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.976511 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.983711 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.989285 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.994785 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kfsxd" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.002733 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbs8t\" (UniqueName: \"kubernetes.io/projected/62efedcb-a194-4692-8e84-a0da7777a400-kube-api-access-sbs8t\") pod \"mariadb-operator-controller-manager-67bf948998-zx6r8\" (UID: \"62efedcb-a194-4692-8e84-a0da7777a400\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.015546 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.035058 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.035163 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.039515 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.040188 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.041032 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-v2cqr" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.043230 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.048402 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.048657 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-28sbr" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.063135 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.067542 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhs7\" (UniqueName: \"kubernetes.io/projected/336c4e93-7d0b-4570-aafc-22e0f812db12-kube-api-access-qjhs7\") pod \"neutron-operator-controller-manager-585dbc889-qt87l\" (UID: \"336c4e93-7d0b-4570-aafc-22e0f812db12\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.067800 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptsxk\" (UniqueName: \"kubernetes.io/projected/0881deda-c42a-48d8-9059-b7eaf66c0f9f-kube-api-access-ptsxk\") pod \"manila-operator-controller-manager-7dd968899f-c89cq\" (UID: \"0881deda-c42a-48d8-9059-b7eaf66c0f9f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.069127 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9lzd\" (UniqueName: \"kubernetes.io/projected/cdb96936-cd34-44fd-94b5-5570688fdfe6-kube-api-access-n9lzd\") pod \"keystone-operator-controller-manager-84f48565d4-xf5fn\" (UID: \"cdb96936-cd34-44fd-94b5-5570688fdfe6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.073559 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-885pn"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.074499 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.077991 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ztdjm" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.093592 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-885pn"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.093660 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.120250 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.122177 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.122277 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.126174 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j4vnr" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.144202 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.172064 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173024 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173091 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhs7\" (UniqueName: \"kubernetes.io/projected/336c4e93-7d0b-4570-aafc-22e0f812db12-kube-api-access-qjhs7\") pod \"neutron-operator-controller-manager-585dbc889-qt87l\" (UID: \"336c4e93-7d0b-4570-aafc-22e0f812db12\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173149 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nqmf\" (UniqueName: \"kubernetes.io/projected/9b88fe2c-a82a-4284-961a-8af3818815ec-kube-api-access-5nqmf\") pod \"ovn-operator-controller-manager-788c46999f-885pn\" (UID: \"9b88fe2c-a82a-4284-961a-8af3818815ec\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173211 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2k2v\" (UniqueName: \"kubernetes.io/projected/2c7ec826-43f0-49f3-9d96-4330427e4ed9-kube-api-access-g2k2v\") pod \"placement-operator-controller-manager-5b964cf4cd-kttv8\" (UID: \"2c7ec826-43f0-49f3-9d96-4330427e4ed9\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173237 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjf68\" (UniqueName: \"kubernetes.io/projected/40ab1792-0354-4c78-ac44-a217fbc610a9-kube-api-access-mjf68\") pod \"nova-operator-controller-manager-55bff696bd-8kf6p\" (UID: \"40ab1792-0354-4c78-ac44-a217fbc610a9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173283 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkf7m\" (UniqueName: \"kubernetes.io/projected/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-kube-api-access-bkf7m\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173320 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-89dhm\" (UniqueName: \"kubernetes.io/projected/ba6fb45a-59ff-42ee-acb0-0ee43d001e1e-kube-api-access-89dhm\") pod \"octavia-operator-controller-manager-6687f8d877-9dbds\" (UID: \"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.181816 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.182212 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.182303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-drg7l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.230116 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhs7\" (UniqueName: \"kubernetes.io/projected/336c4e93-7d0b-4570-aafc-22e0f812db12-kube-api-access-qjhs7\") pod \"neutron-operator-controller-manager-585dbc889-qt87l\" (UID: \"336c4e93-7d0b-4570-aafc-22e0f812db12\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.235494 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.253342 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277650 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nqmf\" (UniqueName: \"kubernetes.io/projected/9b88fe2c-a82a-4284-961a-8af3818815ec-kube-api-access-5nqmf\") pod \"ovn-operator-controller-manager-788c46999f-885pn\" (UID: \"9b88fe2c-a82a-4284-961a-8af3818815ec\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277723 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277760 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2k2v\" (UniqueName: \"kubernetes.io/projected/2c7ec826-43f0-49f3-9d96-4330427e4ed9-kube-api-access-g2k2v\") pod \"placement-operator-controller-manager-5b964cf4cd-kttv8\" (UID: \"2c7ec826-43f0-49f3-9d96-4330427e4ed9\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277793 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjf68\" (UniqueName: \"kubernetes.io/projected/40ab1792-0354-4c78-ac44-a217fbc610a9-kube-api-access-mjf68\") pod \"nova-operator-controller-manager-55bff696bd-8kf6p\" (UID: 
\"40ab1792-0354-4c78-ac44-a217fbc610a9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkf7m\" (UniqueName: \"kubernetes.io/projected/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-kube-api-access-bkf7m\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277913 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dhm\" (UniqueName: \"kubernetes.io/projected/ba6fb45a-59ff-42ee-acb0-0ee43d001e1e-kube-api-access-89dhm\") pod \"octavia-operator-controller-manager-6687f8d877-9dbds\" (UID: \"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277943 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.278090 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.281409 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.281510 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.281481213 +0000 UTC m=+822.154515404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.281536 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:35.781522714 +0000 UTC m=+821.654556905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.287606 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.290148 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.307087 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gjfr9" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.345187 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dhm\" (UniqueName: \"kubernetes.io/projected/ba6fb45a-59ff-42ee-acb0-0ee43d001e1e-kube-api-access-89dhm\") pod \"octavia-operator-controller-manager-6687f8d877-9dbds\" (UID: \"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.355281 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkf7m\" (UniqueName: \"kubernetes.io/projected/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-kube-api-access-bkf7m\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.356204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nqmf\" (UniqueName: \"kubernetes.io/projected/9b88fe2c-a82a-4284-961a-8af3818815ec-kube-api-access-5nqmf\") pod \"ovn-operator-controller-manager-788c46999f-885pn\" (UID: \"9b88fe2c-a82a-4284-961a-8af3818815ec\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.356273 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.357403 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjf68\" (UniqueName: \"kubernetes.io/projected/40ab1792-0354-4c78-ac44-a217fbc610a9-kube-api-access-mjf68\") pod \"nova-operator-controller-manager-55bff696bd-8kf6p\" (UID: \"40ab1792-0354-4c78-ac44-a217fbc610a9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.357560 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.377737 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8xtx9" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.378875 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.383789 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5npq\" (UniqueName: \"kubernetes.io/projected/0e86fa54-1e41-4bb9-86c7-a0ea0d919270-kube-api-access-x5npq\") pod \"swift-operator-controller-manager-68fc8c869-k4b7q\" (UID: \"0e86fa54-1e41-4bb9-86c7-a0ea0d919270\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.386127 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2k2v\" (UniqueName: \"kubernetes.io/projected/2c7ec826-43f0-49f3-9d96-4330427e4ed9-kube-api-access-g2k2v\") pod \"placement-operator-controller-manager-5b964cf4cd-kttv8\" (UID: \"2c7ec826-43f0-49f3-9d96-4330427e4ed9\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.452462 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.473552 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.478199 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.487406 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5l7\" (UniqueName: \"kubernetes.io/projected/b45fb247-850e-40b4-b62e-8551d55efcba-kube-api-access-ns5l7\") pod \"test-operator-controller-manager-56f8bfcd9f-ltfr4\" (UID: \"b45fb247-850e-40b4-b62e-8551d55efcba\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.487506 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jczk\" (UniqueName: \"kubernetes.io/projected/ea8d9bb8-bdec-453d-a308-28b962971254-kube-api-access-7jczk\") pod \"telemetry-operator-controller-manager-64b5b76f97-z4mp8\" (UID: \"ea8d9bb8-bdec-453d-a308-28b962971254\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.487568 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5npq\" (UniqueName: \"kubernetes.io/projected/0e86fa54-1e41-4bb9-86c7-a0ea0d919270-kube-api-access-x5npq\") pod \"swift-operator-controller-manager-68fc8c869-k4b7q\" (UID: \"0e86fa54-1e41-4bb9-86c7-a0ea0d919270\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.504351 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.522688 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zmssx"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.523991 4593 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.526507 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5npq\" (UniqueName: \"kubernetes.io/projected/0e86fa54-1e41-4bb9-86c7-a0ea0d919270-kube-api-access-x5npq\") pod \"swift-operator-controller-manager-68fc8c869-k4b7q\" (UID: \"0e86fa54-1e41-4bb9-86c7-a0ea0d919270\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.527039 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9hpkh" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.560452 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zmssx"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.580968 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.581991 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.583985 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lj4r8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.584179 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.584303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.591317 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jczk\" (UniqueName: \"kubernetes.io/projected/ea8d9bb8-bdec-453d-a308-28b962971254-kube-api-access-7jczk\") pod \"telemetry-operator-controller-manager-64b5b76f97-z4mp8\" (UID: \"ea8d9bb8-bdec-453d-a308-28b962971254\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.591777 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.591877 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gqb\" (UniqueName: \"kubernetes.io/projected/0259a320-8da9-48e5-8d73-25b09774d9c1-kube-api-access-s4gqb\") pod \"watcher-operator-controller-manager-564965969-zmssx\" (UID: \"0259a320-8da9-48e5-8d73-25b09774d9c1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.592037 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5l7\" (UniqueName: 
\"kubernetes.io/projected/b45fb247-850e-40b4-b62e-8551d55efcba-kube-api-access-ns5l7\") pod \"test-operator-controller-manager-56f8bfcd9f-ltfr4\" (UID: \"b45fb247-850e-40b4-b62e-8551d55efcba\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.592139 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.592261 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbkf\" (UniqueName: \"kubernetes.io/projected/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-kube-api-access-rxbkf\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.616411 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.627369 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.630779 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5l7\" (UniqueName: \"kubernetes.io/projected/b45fb247-850e-40b4-b62e-8551d55efcba-kube-api-access-ns5l7\") pod \"test-operator-controller-manager-56f8bfcd9f-ltfr4\" (UID: \"b45fb247-850e-40b4-b62e-8551d55efcba\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.651074 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.651987 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.658589 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jczk\" (UniqueName: \"kubernetes.io/projected/ea8d9bb8-bdec-453d-a308-28b962971254-kube-api-access-7jczk\") pod \"telemetry-operator-controller-manager-64b5b76f97-z4mp8\" (UID: \"ea8d9bb8-bdec-453d-a308-28b962971254\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.658863 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d9bh5" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.682020 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.694956 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.695046 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.195027336 +0000 UTC m=+822.068061527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695314 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gqb\" (UniqueName: \"kubernetes.io/projected/0259a320-8da9-48e5-8d73-25b09774d9c1-kube-api-access-s4gqb\") pod \"watcher-operator-controller-manager-564965969-zmssx\" (UID: \"0259a320-8da9-48e5-8d73-25b09774d9c1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695359 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54qbk\" (UniqueName: \"kubernetes.io/projected/2f32633b-0490-4885-9543-a140c807c742-kube-api-access-54qbk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tfkk2\" (UID: \"2f32633b-0490-4885-9543-a140c807c742\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695397 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695431 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbkf\" (UniqueName: \"kubernetes.io/projected/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-kube-api-access-rxbkf\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.695959 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.695997 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.195984802 +0000 UTC m=+822.069018993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.702854 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.733104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gqb\" (UniqueName: \"kubernetes.io/projected/0259a320-8da9-48e5-8d73-25b09774d9c1-kube-api-access-s4gqb\") pod \"watcher-operator-controller-manager-564965969-zmssx\" (UID: \"0259a320-8da9-48e5-8d73-25b09774d9c1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.734507 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.745300 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbkf\" (UniqueName: \"kubernetes.io/projected/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-kube-api-access-rxbkf\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.796622 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.796725 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54qbk\" (UniqueName: \"kubernetes.io/projected/2f32633b-0490-4885-9543-a140c807c742-kube-api-access-54qbk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tfkk2\" (UID: \"2f32633b-0490-4885-9543-a140c807c742\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.798384 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.798475 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.798422278 +0000 UTC m=+822.671456469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.822653 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: W0129 11:12:35.839673 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5e6d3a8_d6d9_4445_9708_28b88928333e.slice/crio-b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777 WatchSource:0}: Error finding container b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777: Status 404 returned error can't find the container with id b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777 Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.841596 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54qbk\" (UniqueName: \"kubernetes.io/projected/2f32633b-0490-4885-9543-a140c807c742-kube-api-access-54qbk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tfkk2\" (UID: \"2f32633b-0490-4885-9543-a140c807c742\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: W0129 11:12:35.858805 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode35e9127_0337_4860_b938_bb477a408f1e.slice/crio-786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305 WatchSource:0}: Error finding container 786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305: Status 404 returned error can't find the container with id 786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305 Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.862469 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.892496 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.898970 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.019579 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.065335 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.116500 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" event={"ID":"c5e6d3a8-d6d9-4445-9708-28b88928333e","Type":"ContainerStarted","Data":"b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777"} Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.117389 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" event={"ID":"734187ee-1e17-4cdc-b3bb-cfbd6e424793","Type":"ContainerStarted","Data":"965153987ca6aac88bec8776c6ea464b3f89b694a3564f1126b3063b735214df"} Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.118457 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" event={"ID":"e35e9127-0337-4860-b938-bb477a408f1e","Type":"ContainerStarted","Data":"786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305"} Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.201220 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.201562 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.201892 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.202011 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:37.201996953 +0000 UTC m=+823.075031144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.202439 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.202524 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:37.202515556 +0000 UTC m=+823.075549747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.304450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.304619 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.304695 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:38.304675515 +0000 UTC m=+824.177709706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.372523 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.387007 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.395794 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.406103 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq"] Jan 29 11:12:36 crc kubenswrapper[4593]: W0129 11:12:36.414393 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812ebcfb_766d_4a1b_aaaa_2dd5a96ce047.slice/crio-df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164 WatchSource:0}: Error finding container df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164: Status 404 returned error can't find the container with id df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164 Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.446182 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.456064 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.745166 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 
11:12:36.762406 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.809910 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.815297 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.815489 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.815536 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:38.815521423 +0000 UTC m=+824.688555614 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.846253 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-885pn"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.868763 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.874062 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.877938 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.882739 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p"] Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.886971 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ns5l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-ltfr4_openstack-operators(b45fb247-850e-40b4-b62e-8551d55efcba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.888196 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podUID="b45fb247-850e-40b4-b62e-8551d55efcba" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.905004 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4gqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-zmssx_openstack-operators(0259a320-8da9-48e5-8d73-25b09774d9c1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.907322 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.907485 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjf68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-8kf6p_openstack-operators(40ab1792-0354-4c78-ac44-a217fbc610a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.907486 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89dhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-9dbds_openstack-operators(ba6fb45a-59ff-42ee-acb0-0ee43d001e1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc 
kubenswrapper[4593]: E0129 11:12:36.908709 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podUID="40ab1792-0354-4c78-ac44-a217fbc610a9" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.908740 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.912341 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54qbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tfkk2_openstack-operators(2f32633b-0490-4885-9543-a140c807c742): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.915268 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.934310 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.942721 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zmssx"] 
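Editor's note: the entries around this point show two throttling behaviours worth calling out: the MountVolume.SetUp retries for the missing webhook/metrics secrets back off with a doubling delay (durationBeforeRetry 1s, then 2s, 4s, 8s, 16s), and the burst of operator pods starting at once exhausts the kubelet's image-pull rate limit, surfacing as ErrImagePull "pull QPS exceeded" followed by ImagePullBackOff. The Go sketch below is illustrative only, not kubelet source; the 5 pulls/s, burst 10 and 2m2s cap are assumed example values (kubelet exposes the pull limits as registryPullQPS / registryBurst in its KubeletConfiguration).

// backoff_and_pullqps.go - minimal sketch of the two patterns seen in the log above.
package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// nextRetryDelay doubles the previous delay, starting at 1s and capped at maxDelay,
// mirroring the durationBeforeRetry progression in the mount errors above.
func nextRetryDelay(prev, maxDelay time.Duration) time.Duration {
	if prev <= 0 {
		return time.Second
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	// Retry delays for a volume whose secret never appears (cap value assumed).
	d := time.Duration(0)
	for i := 0; i < 6; i++ {
		d = nextRetryDelay(d, 2*time.Minute+2*time.Second)
		fmt.Printf("mount retry %d after %v\n", i+1, d)
	}

	// Token-bucket pull limiter: 5 pulls/s sustained, burst of 10 (assumed values).
	// Once the burst is spent, further immediate pulls are rejected, which kubelet
	// reports to the pod as ErrImagePull: "pull QPS exceeded".
	pulls := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 25; i++ {
		if !pulls.Allow() {
			fmt.Printf("image pull %d: pull QPS exceeded\n", i+1)
			continue
		}
		fmt.Printf("image pull %d: allowed\n", i+1)
	}
}

Running the sketch prints roughly ten allowed pulls followed by rejections, which matches the pattern above where only some operator images start pulling while the rest immediately fall into ImagePullBackOff; the missing-secret mounts, by contrast, keep retrying on the doubling schedule until the secret objects are created. The raw journal entries continue below.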
Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.955187 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2"] Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.128750 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" event={"ID":"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e","Type":"ContainerStarted","Data":"b6904b122aa43e6bfe8e8f8a8012d3bcb9a23b1ca090ef3aad98496517e2db56"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.129856 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.130316 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" event={"ID":"cdb96936-cd34-44fd-94b5-5570688fdfe6","Type":"ContainerStarted","Data":"b57c48584683a7b772fb34becddc58db9678326e8edb615515f279fff1c48fa7"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.133709 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" event={"ID":"2c7ec826-43f0-49f3-9d96-4330427e4ed9","Type":"ContainerStarted","Data":"582c2d7f177ec4cfde444c5f91fb5f538f8433bdb119026844f9e6f8a9afdb15"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.135495 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" event={"ID":"0881deda-c42a-48d8-9059-b7eaf66c0f9f","Type":"ContainerStarted","Data":"4bc0aa79b3876fa5d3ab832ecbfad28227117613b1b79f5d10a9b94f8b4e877e"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.137016 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" event={"ID":"50a8381e-e59b-4400-9209-c33ef4f99c23","Type":"ContainerStarted","Data":"dd09c96251cf7561fa20be69218c5d25a25dba5a7216d037bb115aa599824c5b"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.157059 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" event={"ID":"336c4e93-7d0b-4570-aafc-22e0f812db12","Type":"ContainerStarted","Data":"a820fc0f0d271023af320c507058fdac3ab434ba6c76ffad7488457a52d75bd1"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.159048 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" event={"ID":"0259a320-8da9-48e5-8d73-25b09774d9c1","Type":"ContainerStarted","Data":"da266e037f1b44105a24231dff74753f4daa8e8e13109ed35943b4a4f035d3fc"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.162992 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" 
podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.163993 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" event={"ID":"ea8d9bb8-bdec-453d-a308-28b962971254","Type":"ContainerStarted","Data":"8cd6cd11f94ddece266f00c5871f4c069288985d2333a6f1fd538ed5232edae2"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.179443 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" event={"ID":"499923d8-4593-4225-bc4c-6166003a0bb3","Type":"ContainerStarted","Data":"b695db3e07b3495e141f68edcb1032b6e88dbd5ce50caf474deafd692bb9303c"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.184735 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" event={"ID":"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047","Type":"ContainerStarted","Data":"df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.193990 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" event={"ID":"2f32633b-0490-4885-9543-a140c807c742","Type":"ContainerStarted","Data":"57983b33b9c4365af458eb0a487a37e898ce0961793a79dcde8f7dee293c0035"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.195353 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.195680 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" event={"ID":"0e86fa54-1e41-4bb9-86c7-a0ea0d919270","Type":"ContainerStarted","Data":"7e199caad175b7645f2e173d45a257d98ed4b7bad605f6d3b4f4bb3eb3b6804b"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.201581 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" event={"ID":"50471b23-1d0d-4bd9-a66f-a89b3a39a612","Type":"ContainerStarted","Data":"29a4ccf3e7a9396fff270675aaf15dcb46f48c28d1f6813e5fcf208efd72db60"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.203491 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" event={"ID":"b45fb247-850e-40b4-b62e-8551d55efcba","Type":"ContainerStarted","Data":"fe7fa25a28f3eb925519b80a9193c791f8b156af0045d9f6e3d2f1039ec86900"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.212817 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podUID="b45fb247-850e-40b4-b62e-8551d55efcba" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.214832 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" event={"ID":"40ab1792-0354-4c78-ac44-a217fbc610a9","Type":"ContainerStarted","Data":"0ede2967655f210367648677750ecf2a3054e4c19502eb303c694da0e5d91abc"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.216749 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podUID="40ab1792-0354-4c78-ac44-a217fbc610a9" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.227202 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.227404 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230195 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230248 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:39.230232357 +0000 UTC m=+825.103266548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230311 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230371 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:39.230353051 +0000 UTC m=+825.103387272 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.232032 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" event={"ID":"9b88fe2c-a82a-4284-961a-8af3818815ec","Type":"ContainerStarted","Data":"018f12c4d542f62ba0c41899892c28cfae8b1ba0a417cce1c065adabc73c7289"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.233888 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" event={"ID":"62efedcb-a194-4692-8e84-a0da7777a400","Type":"ContainerStarted","Data":"dc08e9cc530f50716a46502f0ac25e8a9245724d249bdbf70860fbbffeb17f31"} Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.243508 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.244291 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podUID="b45fb247-850e-40b4-b62e-8551d55efcba" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.249272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.249323 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.249338 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podUID="40ab1792-0354-4c78-ac44-a217fbc610a9" Jan 29 11:12:38 crc kubenswrapper[4593]: I0129 11:12:38.343429 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.343627 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.343692 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:42.343675151 +0000 UTC m=+828.216709342 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:38 crc kubenswrapper[4593]: I0129 11:12:38.850875 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.851104 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.851159 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:42.85114279 +0000 UTC m=+828.724176981 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: I0129 11:12:39.257720 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:39 crc kubenswrapper[4593]: I0129 11:12:39.257804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.257896 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.257904 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.258011 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:43.25797989 +0000 UTC m=+829.131014081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.258088 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:43.258038822 +0000 UTC m=+829.131073073 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: I0129 11:12:42.421912 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.422233 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.422508 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:50.422474221 +0000 UTC m=+836.295508452 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: I0129 11:12:42.931149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.931398 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.931491 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:50.93146165 +0000 UTC m=+836.804495841 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: I0129 11:12:43.337930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:43 crc kubenswrapper[4593]: I0129 11:12:43.338008 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338150 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338237 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:51.338214368 +0000 UTC m=+837.211248559 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338154 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338286 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:51.33827585 +0000 UTC m=+837.211310041 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:47 crc kubenswrapper[4593]: E0129 11:12:47.481089 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 29 11:12:47 crc kubenswrapper[4593]: E0129 11:12:47.481565 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mb8q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-2ml7m_openstack-operators(499923d8-4593-4225-bc4c-6166003a0bb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:47 crc kubenswrapper[4593]: E0129 11:12:47.482921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" 
podUID="499923d8-4593-4225-bc4c-6166003a0bb3" Jan 29 11:12:48 crc kubenswrapper[4593]: E0129 11:12:48.314925 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" podUID="499923d8-4593-4225-bc4c-6166003a0bb3" Jan 29 11:12:50 crc kubenswrapper[4593]: I0129 11:12:50.448845 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.449002 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.449456 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:06.449427685 +0000 UTC m=+852.322461896 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.907738 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.907938 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nqmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-885pn_openstack-operators(9b88fe2c-a82a-4284-961a-8af3818815ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.909160 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" podUID="9b88fe2c-a82a-4284-961a-8af3818815ec" Jan 29 11:12:50 crc kubenswrapper[4593]: I0129 11:12:50.955932 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.956058 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.956123 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:06.956104104 +0000 UTC m=+852.829138295 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.337170 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" podUID="9b88fe2c-a82a-4284-961a-8af3818815ec" Jan 29 11:12:51 crc kubenswrapper[4593]: I0129 11:12:51.361110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:51 crc kubenswrapper[4593]: I0129 11:12:51.361182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.361826 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.361892 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:07.361873746 +0000 UTC m=+853.234907937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.361974 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.362006 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:07.36199746 +0000 UTC m=+853.235031641 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.596771 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.596988 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-z4mp8_openstack-operators(ea8d9bb8-bdec-453d-a308-28b962971254): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.598162 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
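Editor's note: the MountVolume.SetUp failures above are kubelet waiting for Secrets that do not exist yet ("webhook-server-cert", "metrics-server-cert", "openstack-baremetal-operator-webhook-server-cert" in openstack-operators); it retries on a backoff (the "durationBeforeRetry 16s" above) and, as the later entries at 11:13:06-11:13:07 show, the mounts succeed once the Secrets appear. A minimal sketch of how one could confirm whether those Secrets exist, using client-go and assuming a reachable kubeconfig in $KUBECONFIG (illustrative only, not part of this log):

```go
// Check whether the Secrets kubelet reported as missing exist yet.
// Names and namespace are taken from the log entries above.
package main

import (
	"context"
	"fmt"
	"os"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "openstack-operators"
	for _, name := range []string{
		"webhook-server-cert",
		"metrics-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
	} {
		_, err := client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			// Same condition kubelet hit: the secret volume cannot be set up yet.
			fmt.Printf("%s: not found (MountVolume.SetUp would fail)\n", name)
		case err != nil:
			fmt.Printf("%s: error: %v\n", name, err)
		default:
			fmt.Printf("%s: present\n", name)
		}
	}
}
```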
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" podUID="ea8d9bb8-bdec-453d-a308-28b962971254" Jan 29 11:12:52 crc kubenswrapper[4593]: E0129 11:12:52.343556 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" podUID="ea8d9bb8-bdec-453d-a308-28b962971254" Jan 29 11:12:54 crc kubenswrapper[4593]: E0129 11:12:54.358522 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 11:12:54 crc kubenswrapper[4593]: E0129 11:12:54.359087 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptsxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-c89cq_openstack-operators(0881deda-c42a-48d8-9059-b7eaf66c0f9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:54 crc 
kubenswrapper[4593]: E0129 11:12:54.360266 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" podUID="0881deda-c42a-48d8-9059-b7eaf66c0f9f" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.365010 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" podUID="0881deda-c42a-48d8-9059-b7eaf66c0f9f" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.568074 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.568294 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ksppz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
heat-operator-controller-manager-69d6db494d-xqcrc_openstack-operators(50471b23-1d0d-4bd9-a66f-a89b3a39a612): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.570297 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" podUID="50471b23-1d0d-4bd9-a66f-a89b3a39a612" Jan 29 11:12:56 crc kubenswrapper[4593]: E0129 11:12:56.370453 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" podUID="50471b23-1d0d-4bd9-a66f-a89b3a39a612" Jan 29 11:12:57 crc kubenswrapper[4593]: E0129 11:12:57.752043 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 11:12:57 crc kubenswrapper[4593]: E0129 11:12:57.752243 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2k2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-kttv8_openstack-operators(2c7ec826-43f0-49f3-9d96-4330427e4ed9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:57 crc kubenswrapper[4593]: E0129 11:12:57.753319 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" podUID="2c7ec826-43f0-49f3-9d96-4330427e4ed9" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.291353 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.291598 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k46bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-t584q_openstack-operators(812ebcfb-766d-4a1b-aaaa-2dd5a96ce047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.292816 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" podUID="812ebcfb-766d-4a1b-aaaa-2dd5a96ce047" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.382501 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" podUID="812ebcfb-766d-4a1b-aaaa-2dd5a96ce047" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.384273 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" podUID="2c7ec826-43f0-49f3-9d96-4330427e4ed9" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.859443 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.860668 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5npq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-k4b7q_openstack-operators(0e86fa54-1e41-4bb9-86c7-a0ea0d919270): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.861868 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" podUID="0e86fa54-1e41-4bb9-86c7-a0ea0d919270" Jan 29 11:12:59 crc kubenswrapper[4593]: E0129 11:12:59.388120 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" podUID="0e86fa54-1e41-4bb9-86c7-a0ea0d919270" Jan 29 11:13:01 crc kubenswrapper[4593]: E0129 11:13:01.539556 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 11:13:01 crc kubenswrapper[4593]: E0129 11:13:01.544706 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbs8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-zx6r8_openstack-operators(62efedcb-a194-4692-8e84-a0da7777a400): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:01 crc kubenswrapper[4593]: E0129 11:13:01.547560 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" podUID="62efedcb-a194-4692-8e84-a0da7777a400" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.119163 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.119851 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9lzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-xf5fn_openstack-operators(cdb96936-cd34-44fd-94b5-5570688fdfe6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.120996 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" podUID="cdb96936-cd34-44fd-94b5-5570688fdfe6" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.531686 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" podUID="62efedcb-a194-4692-8e84-a0da7777a400" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.539204 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" 
podUID="cdb96936-cd34-44fd-94b5-5570688fdfe6" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.946547 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.946665 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.946733 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.947509 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.947704 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2" gracePeriod=600 Jan 29 11:13:05 crc kubenswrapper[4593]: I0129 11:13:05.554251 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2" exitCode=0 Jan 29 11:13:05 crc kubenswrapper[4593]: I0129 11:13:05.554324 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2"} Jan 29 11:13:05 crc kubenswrapper[4593]: I0129 11:13:05.554624 4593 scope.go:117] "RemoveContainer" containerID="ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.276902 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.277091 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89dhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-9dbds_openstack-operators(ba6fb45a-59ff-42ee-acb0-0ee43d001e1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.278294 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.500843 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.522248 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.739212 4593 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q26cz" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.747700 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.871468 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.871840 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4gqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-zmssx_openstack-operators(0259a320-8da9-48e5-8d73-25b09774d9c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.874210 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.007367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.011412 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.222357 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-28sbr" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.231028 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.413422 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.413804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.420416 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.420694 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.544618 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lj4r8" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.553175 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:09 crc kubenswrapper[4593]: E0129 11:13:09.230008 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 11:13:09 crc kubenswrapper[4593]: E0129 11:13:09.230897 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54qbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tfkk2_openstack-operators(2f32633b-0490-4885-9543-a140c807c742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:09 crc kubenswrapper[4593]: E0129 11:13:09.232110 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:13:09 crc kubenswrapper[4593]: I0129 11:13:09.670896 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt"] Jan 29 11:13:09 crc kubenswrapper[4593]: I0129 11:13:09.954602 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb"] Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.122818 4593 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p"] Jan 29 11:13:10 crc kubenswrapper[4593]: W0129 11:13:10.152726 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod960bb326_dc22_43e5_bc4f_05c9ce9e26a9.slice/crio-a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce WatchSource:0}: Error finding container a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce: Status 404 returned error can't find the container with id a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.677978 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" event={"ID":"0881deda-c42a-48d8-9059-b7eaf66c0f9f","Type":"ContainerStarted","Data":"e395f982bfa07a71d1aa775488c937505a4ada3659c8a3636bb859871634c770"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.679137 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.692050 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" event={"ID":"50471b23-1d0d-4bd9-a66f-a89b3a39a612","Type":"ContainerStarted","Data":"b231b187705c9af3e3ae611acabe98946d39cdff466dec66822fc7e563b85228"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.692317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.701770 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" event={"ID":"50a8381e-e59b-4400-9209-c33ef4f99c23","Type":"ContainerStarted","Data":"ce69171986ca0b12a3f4ac966fd11a910974d71a94f7229909ad2a3889479412"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.702575 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.710398 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" event={"ID":"40ab1792-0354-4c78-ac44-a217fbc610a9","Type":"ContainerStarted","Data":"4fd87f5b6d25adeb291e3d201cbaf541da2bd334f0ef25741c61cc6cdde84fe6"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.710884 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.713679 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" podStartSLOduration=3.757414551 podStartE2EDuration="36.713659336s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.417411639 +0000 UTC m=+822.290445830" lastFinishedPulling="2026-01-29 11:13:09.373656424 +0000 UTC m=+855.246690615" observedRunningTime="2026-01-29 11:13:10.710867571 +0000 UTC m=+856.583901762" watchObservedRunningTime="2026-01-29 11:13:10.713659336 +0000 UTC m=+856.586693547" Jan 29 11:13:10 crc 
kubenswrapper[4593]: I0129 11:13:10.717503 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" event={"ID":"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073","Type":"ContainerStarted","Data":"bc4a3768fa1c9cca4812d193310cac28fcbf1805af95c04e1a9386ba634aae79"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.731151 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" event={"ID":"ea8d9bb8-bdec-453d-a308-28b962971254","Type":"ContainerStarted","Data":"4a8735b1c5a5e878884c825469cc70b09b364da8e3d7918b0de752bfddf419a3"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.731867 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.733961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" event={"ID":"499923d8-4593-4225-bc4c-6166003a0bb3","Type":"ContainerStarted","Data":"62e93778726a1f41355dbbf7285244bf9bb1f28814e7a5be4edd90d02a79250e"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.734407 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.735730 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" event={"ID":"960bb326-dc22-43e5-bc4f-05c9ce9e26a9","Type":"ContainerStarted","Data":"d161ff8604ed6842d1b926313fb9ce28b0699c4b7ecd9d89b39cb0417ed598de"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.735757 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" event={"ID":"960bb326-dc22-43e5-bc4f-05c9ce9e26a9","Type":"ContainerStarted","Data":"a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.736238 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.739028 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" event={"ID":"c5e6d3a8-d6d9-4445-9708-28b88928333e","Type":"ContainerStarted","Data":"e84cbf3484cac3ce8eddf8160f2011836e78be1faec794bd083be1721d2abcb6"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.739565 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.745502 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.746654 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podStartSLOduration=4.339052038 podStartE2EDuration="36.746640671s" 
podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.907392395 +0000 UTC m=+822.780426586" lastFinishedPulling="2026-01-29 11:13:09.314981028 +0000 UTC m=+855.188015219" observedRunningTime="2026-01-29 11:13:10.743913128 +0000 UTC m=+856.616947319" watchObservedRunningTime="2026-01-29 11:13:10.746640671 +0000 UTC m=+856.619674862" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.747464 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" event={"ID":"734187ee-1e17-4cdc-b3bb-cfbd6e424793","Type":"ContainerStarted","Data":"6497dccb3a34f47dd9bbd0fb8434cef415eec621b65f013293f1df2be85fb4c8"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.747836 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.748729 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" event={"ID":"e35e9127-0337-4860-b938-bb477a408f1e","Type":"ContainerStarted","Data":"0626d55e873d56eda1b1771a724c1d55292071d479a657ee58d4b21362b1033f"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.749048 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.769910 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" event={"ID":"c2cda883-37e6-4c21-b320-4962ffdc98b3","Type":"ContainerStarted","Data":"be00a3caffe19975b470e0e50b2a718bbd85fb7eba28c115a53731c77e7cbe98"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.792191 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" event={"ID":"336c4e93-7d0b-4570-aafc-22e0f812db12","Type":"ContainerStarted","Data":"40d45fbb9de216994de45466c292c4b042e477acb167d5cd19427c458a4db60d"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.793016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.803268 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" podStartSLOduration=3.384569566 podStartE2EDuration="36.803250572s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.397716279 +0000 UTC m=+822.270750470" lastFinishedPulling="2026-01-29 11:13:09.816397285 +0000 UTC m=+855.689431476" observedRunningTime="2026-01-29 11:13:10.793187212 +0000 UTC m=+856.666221403" watchObservedRunningTime="2026-01-29 11:13:10.803250572 +0000 UTC m=+856.676284763" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.810432 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" event={"ID":"b45fb247-850e-40b4-b62e-8551d55efcba","Type":"ContainerStarted","Data":"e50ab71589fb968b76137a627ecacb4e8d703634656004c9b0b230eac132891c"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.811253 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.840421 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" podStartSLOduration=10.078805015 podStartE2EDuration="36.840384919s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.456499063 +0000 UTC m=+822.329533254" lastFinishedPulling="2026-01-29 11:13:03.218078967 +0000 UTC m=+849.091113158" observedRunningTime="2026-01-29 11:13:10.834267875 +0000 UTC m=+856.707302066" watchObservedRunningTime="2026-01-29 11:13:10.840384919 +0000 UTC m=+856.713419110" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.841794 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" event={"ID":"9b88fe2c-a82a-4284-961a-8af3818815ec","Type":"ContainerStarted","Data":"3789ccba04697340b75376fc150b0baf7a2392f0058aa4ae83348b4fb42b45cf"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.842184 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.955832 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" podStartSLOduration=4.358213881 podStartE2EDuration="36.955792929s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.877413767 +0000 UTC m=+822.750447958" lastFinishedPulling="2026-01-29 11:13:09.474992805 +0000 UTC m=+855.348027006" observedRunningTime="2026-01-29 11:13:10.938059183 +0000 UTC m=+856.811093394" watchObservedRunningTime="2026-01-29 11:13:10.955792929 +0000 UTC m=+856.828827110" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.958263 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" podStartSLOduration=11.399390052 podStartE2EDuration="36.958241694s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:35.973768535 +0000 UTC m=+821.846802726" lastFinishedPulling="2026-01-29 11:13:01.532620187 +0000 UTC m=+847.405654368" observedRunningTime="2026-01-29 11:13:10.874905976 +0000 UTC m=+856.747940167" watchObservedRunningTime="2026-01-29 11:13:10.958241694 +0000 UTC m=+856.831275885" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.988221 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" podStartSLOduration=14.59799531 podStartE2EDuration="36.988191779s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:35.895864104 +0000 UTC m=+821.768898295" lastFinishedPulling="2026-01-29 11:12:58.286060573 +0000 UTC m=+844.159094764" observedRunningTime="2026-01-29 11:13:10.983856503 +0000 UTC m=+856.856890704" watchObservedRunningTime="2026-01-29 11:13:10.988191779 +0000 UTC m=+856.861225970" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.022104 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podStartSLOduration=4.593918393 podStartE2EDuration="37.02208864s" 
podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.886728639 +0000 UTC m=+822.759762830" lastFinishedPulling="2026-01-29 11:13:09.314898866 +0000 UTC m=+855.187933077" observedRunningTime="2026-01-29 11:13:11.019074679 +0000 UTC m=+856.892108890" watchObservedRunningTime="2026-01-29 11:13:11.02208864 +0000 UTC m=+856.895122831" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.154445 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" podStartSLOduration=36.154424284 podStartE2EDuration="36.154424284s" podCreationTimestamp="2026-01-29 11:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:13:11.149997905 +0000 UTC m=+857.023032096" watchObservedRunningTime="2026-01-29 11:13:11.154424284 +0000 UTC m=+857.027458475" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.212294 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" podStartSLOduration=4.319242477 podStartE2EDuration="37.212274478s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.421864945 +0000 UTC m=+822.294899136" lastFinishedPulling="2026-01-29 11:13:09.314896936 +0000 UTC m=+855.187931137" observedRunningTime="2026-01-29 11:13:11.194324536 +0000 UTC m=+857.067358737" watchObservedRunningTime="2026-01-29 11:13:11.212274478 +0000 UTC m=+857.085308659" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.221523 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" podStartSLOduration=8.418551775 podStartE2EDuration="37.221493295s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.775804753 +0000 UTC m=+822.648838944" lastFinishedPulling="2026-01-29 11:13:05.578746273 +0000 UTC m=+851.451780464" observedRunningTime="2026-01-29 11:13:11.219752768 +0000 UTC m=+857.092786959" watchObservedRunningTime="2026-01-29 11:13:11.221493295 +0000 UTC m=+857.094527506" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.263256 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" podStartSLOduration=14.828606506 podStartE2EDuration="37.263237446s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:35.852223733 +0000 UTC m=+821.725257924" lastFinishedPulling="2026-01-29 11:12:58.286854673 +0000 UTC m=+844.159888864" observedRunningTime="2026-01-29 11:13:11.261315185 +0000 UTC m=+857.134349376" watchObservedRunningTime="2026-01-29 11:13:11.263237446 +0000 UTC m=+857.136271637" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.856617 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" event={"ID":"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047","Type":"ContainerStarted","Data":"66670e17430983198f3bd51333458e98de5166755abe9118d08fca861d9f73b7"} Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.937613 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" podStartSLOduration=5.393157804 
podStartE2EDuration="37.9375896s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.862266415 +0000 UTC m=+822.735300606" lastFinishedPulling="2026-01-29 11:13:09.406698191 +0000 UTC m=+855.279732402" observedRunningTime="2026-01-29 11:13:11.356299457 +0000 UTC m=+857.229333668" watchObservedRunningTime="2026-01-29 11:13:11.9375896 +0000 UTC m=+857.810623791" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.941557 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" podStartSLOduration=3.79699271 podStartE2EDuration="37.941543056s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.411301511 +0000 UTC m=+822.284335702" lastFinishedPulling="2026-01-29 11:13:10.555851857 +0000 UTC m=+856.428886048" observedRunningTime="2026-01-29 11:13:11.936959082 +0000 UTC m=+857.809993273" watchObservedRunningTime="2026-01-29 11:13:11.941543056 +0000 UTC m=+857.814577247" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.890173 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" event={"ID":"2c7ec826-43f0-49f3-9d96-4330427e4ed9","Type":"ContainerStarted","Data":"72c71f46f45bc9200a61f2fe96a5e57792c486fd79edd6edbac1fac91ec38878"} Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.890960 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.899606 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" event={"ID":"0e86fa54-1e41-4bb9-86c7-a0ea0d919270","Type":"ContainerStarted","Data":"a7d6ee1831a5c14518a71cc9f80893decec79f51ac3109b12c6a77aa6c923b6e"} Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.899815 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.916135 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" podStartSLOduration=4.237500918 podStartE2EDuration="39.916117162s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.879840651 +0000 UTC m=+822.752874842" lastFinishedPulling="2026-01-29 11:13:12.558456895 +0000 UTC m=+858.431491086" observedRunningTime="2026-01-29 11:13:13.914714114 +0000 UTC m=+859.787748305" watchObservedRunningTime="2026-01-29 11:13:13.916117162 +0000 UTC m=+859.789151353" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.930889 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" podStartSLOduration=3.163524962 podStartE2EDuration="39.930876078s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.780519325 +0000 UTC m=+822.653553516" lastFinishedPulling="2026-01-29 11:13:13.547870441 +0000 UTC m=+859.420904632" observedRunningTime="2026-01-29 11:13:13.928434873 +0000 UTC m=+859.801469064" watchObservedRunningTime="2026-01-29 11:13:13.930876078 +0000 UTC m=+859.803910269" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.622535 
4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.640999 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.697973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.891025 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.063650 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.067687 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.187287 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.260287 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.456282 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.638973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.701088 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.743240 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.929173 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" event={"ID":"cdb96936-cd34-44fd-94b5-5570688fdfe6","Type":"ContainerStarted","Data":"19de6d55484fcb2fd18981d647ca6de6a0f6695bc25dac585e66cef31e3a2d98"} Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.929582 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.931301 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" event={"ID":"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073","Type":"ContainerStarted","Data":"bf62fe720cc32b4683be192add558948198dd806971fc03e3e3a34ed038e5ee7"} Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.931442 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.932497 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" event={"ID":"c2cda883-37e6-4c21-b320-4962ffdc98b3","Type":"ContainerStarted","Data":"2103598935a9d72d9150d67bbadf9ad2c574b7c2f0779f0d44481950669ede18"} Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.932605 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.944406 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" podStartSLOduration=2.834155317 podStartE2EDuration="42.94438826s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.467905229 +0000 UTC m=+822.340939420" lastFinishedPulling="2026-01-29 11:13:16.578138172 +0000 UTC m=+862.451172363" observedRunningTime="2026-01-29 11:13:16.942764246 +0000 UTC m=+862.815798447" watchObservedRunningTime="2026-01-29 11:13:16.94438826 +0000 UTC m=+862.817422451" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.975226 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" podStartSLOduration=36.381714779 podStartE2EDuration="42.975201787s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:13:09.978122029 +0000 UTC m=+855.851156220" lastFinishedPulling="2026-01-29 11:13:16.571609027 +0000 UTC m=+862.444643228" observedRunningTime="2026-01-29 11:13:16.970900492 +0000 UTC m=+862.843934693" watchObservedRunningTime="2026-01-29 11:13:16.975201787 +0000 UTC m=+862.848235978" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.997206 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" podStartSLOduration=36.159620785 podStartE2EDuration="42.997182958s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:13:09.739465539 +0000 UTC m=+855.612499730" lastFinishedPulling="2026-01-29 11:13:16.577027712 +0000 UTC m=+862.450061903" observedRunningTime="2026-01-29 11:13:16.99132105 +0000 UTC m=+862.864355251" watchObservedRunningTime="2026-01-29 11:13:16.997182958 +0000 UTC m=+862.870217149" Jan 29 11:13:17 crc kubenswrapper[4593]: I0129 11:13:17.559114 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:18 crc kubenswrapper[4593]: I0129 11:13:18.945432 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" event={"ID":"62efedcb-a194-4692-8e84-a0da7777a400","Type":"ContainerStarted","Data":"6a7a3b4edc11f928639449a1f7d706a8d8c95e7f9b476367bd5168246fc8526e"} Jan 29 11:13:18 crc kubenswrapper[4593]: I0129 11:13:18.946669 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:13:18 crc kubenswrapper[4593]: I0129 11:13:18.967526 4593 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" podStartSLOduration=3.305937832 podStartE2EDuration="44.967503839s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.861117895 +0000 UTC m=+822.734152086" lastFinishedPulling="2026-01-29 11:13:18.522683882 +0000 UTC m=+864.395718093" observedRunningTime="2026-01-29 11:13:18.963249195 +0000 UTC m=+864.836283386" watchObservedRunningTime="2026-01-29 11:13:18.967503839 +0000 UTC m=+864.840538020" Jan 29 11:13:19 crc kubenswrapper[4593]: E0129 11:13:19.078085 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:13:20 crc kubenswrapper[4593]: E0129 11:13:20.077450 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:13:20 crc kubenswrapper[4593]: E0129 11:13:20.077585 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:13:24 crc kubenswrapper[4593]: I0129 11:13:24.716937 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:13:24 crc kubenswrapper[4593]: I0129 11:13:24.757565 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.148103 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.259305 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.481571 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.825185 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:13:26 crc kubenswrapper[4593]: I0129 11:13:26.754536 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:27 crc kubenswrapper[4593]: I0129 11:13:27.239371 4593 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:32 crc kubenswrapper[4593]: I0129 11:13:32.076298 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:13:33 crc kubenswrapper[4593]: I0129 11:13:33.041485 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" event={"ID":"0259a320-8da9-48e5-8d73-25b09774d9c1","Type":"ContainerStarted","Data":"40e1fde520d3392e4c75be969974c783b32b945e8bc13323204eaa9722384e5e"} Jan 29 11:13:33 crc kubenswrapper[4593]: I0129 11:13:33.042022 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:13:33 crc kubenswrapper[4593]: I0129 11:13:33.066369 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podStartSLOduration=2.429511901 podStartE2EDuration="58.066349527s" podCreationTimestamp="2026-01-29 11:12:35 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.90487249 +0000 UTC m=+822.777906681" lastFinishedPulling="2026-01-29 11:13:32.541710116 +0000 UTC m=+878.414744307" observedRunningTime="2026-01-29 11:13:33.057809638 +0000 UTC m=+878.930843829" watchObservedRunningTime="2026-01-29 11:13:33.066349527 +0000 UTC m=+878.939383718" Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.061423 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" event={"ID":"2f32633b-0490-4885-9543-a140c807c742","Type":"ContainerStarted","Data":"cb9a81743cd483803fa0d10904e0bfe6026c9c670e8a251a6150438a487d91de"} Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.063534 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" event={"ID":"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e","Type":"ContainerStarted","Data":"274529be6a5c28dc3c29f2a5e2ea7263a379e80db25fab52d7a0f10d147c8dd4"} Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.064086 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.112160 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podStartSLOduration=2.380402156 podStartE2EDuration="1m1.112143997s" podCreationTimestamp="2026-01-29 11:12:35 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.912244881 +0000 UTC m=+822.785279072" lastFinishedPulling="2026-01-29 11:13:35.643986722 +0000 UTC m=+881.517020913" observedRunningTime="2026-01-29 11:13:36.086346394 +0000 UTC m=+881.959380585" watchObservedRunningTime="2026-01-29 11:13:36.112143997 +0000 UTC m=+881.985178178" Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.114984 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podStartSLOduration=3.377289888 podStartE2EDuration="1m2.114976324s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.907361215 +0000 UTC m=+822.780395406" lastFinishedPulling="2026-01-29 
29 11:13:35.645047651 +0000 UTC m=+881.518081842" observedRunningTime="2026-01-29 11:13:36.108761806 +0000 UTC m=+881.981796017" watchObservedRunningTime="2026-01-29 11:13:36.114976324 +0000 UTC m=+881.988010515" Jan 29 11:13:45 crc kubenswrapper[4593]: I0129 11:13:45.631468 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:13:46 crc kubenswrapper[4593]: I0129 11:13:46.022856 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.330823 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.332993 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.337613 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fhqs4" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.337895 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.337969 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.345147 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.354875 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.365351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.365410 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.446742 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.452690 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.456129 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.468586 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.469204 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.469255 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.470095 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.517363 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.570477 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.570575 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.570845 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.665095 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.672181 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.672409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.672969 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.673248 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.673348 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.705472 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.773258 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.038496 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.158263 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:02 crc kubenswrapper[4593]: W0129 11:14:02.164216 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb705d0db_8509_4a63_9f5a_87976d741ebc.slice/crio-c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855 WatchSource:0}: Error finding container c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855: Status 404 returned error can't find the container with id c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855 Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.445185 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" event={"ID":"3616718a-e7ca-4045-941b-4109f08f4989","Type":"ContainerStarted","Data":"57892c814f48ce6859a27a763582b6a66ed12dadc0f9828ee1126b0622d692ee"} Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.446567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" event={"ID":"b705d0db-8509-4a63-9f5a-87976d741ebc","Type":"ContainerStarted","Data":"c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855"} Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.253679 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.294255 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.298165 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.301434 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.424128 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.424176 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.424202 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.525943 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.525997 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.526022 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.526936 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.526944 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.553845 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqcc\" (UniqueName: 
\"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.631246 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.660319 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.694019 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.695119 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.752393 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.839415 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.839800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.839824 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.940649 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.940696 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.940744 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.941650 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.941871 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.970336 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.080921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.314612 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:05 crc kubenswrapper[4593]: W0129 11:14:05.339558 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e7df070_9e8b_4e24_ac24_4593ef89aca9.slice/crio-565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e WatchSource:0}: Error finding container 565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e: Status 404 returned error can't find the container with id 565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.467918 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.470781 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477054 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477306 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477478 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477568 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ck876" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477670 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477700 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.481837 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.496464 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.510409 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerStarted","Data":"565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e"} Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551574 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551619 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551680 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551697 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551716 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551738 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551754 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551786 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551814 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551843 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551870 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654440 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654507 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654543 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654576 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654607 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654626 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654676 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654701 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654762 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654800 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654949 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654995 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.655825 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.655897 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.656946 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.657174 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.662248 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.668175 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.668984 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.669846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.678462 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.683971 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.814767 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.819044 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.910785 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.912000 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.916749 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.916964 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.916982 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917132 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ztnqn" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917204 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917248 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917380 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.927411 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.959959 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960323 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960420 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960501 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960569 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960710 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960815 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960936 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.961043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.961131 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.065584 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066017 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmxq\" (UniqueName: 
\"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066074 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066103 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066271 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066321 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066343 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066400 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066430 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066453 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066488 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.067698 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.067981 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.068589 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.069321 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.070345 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.075312 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.092168 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.096600 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.107908 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.135226 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.136458 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.139056 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.258556 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.553565 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.568310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" event={"ID":"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b","Type":"ContainerStarted","Data":"007a02e651669e8d70d7d24081e75b51bae9e37c2bf6d5643b4ba609d3b0011b"} Jan 29 11:14:06 crc kubenswrapper[4593]: W0129 11:14:06.639425 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f6d0a4_2543_4de8_a64e_f3ce4c2bb11e.slice/crio-5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090 WatchSource:0}: Error finding container 5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090: Status 404 returned error can't find the container with id 5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090 Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.920107 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.118650 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.154060 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.154219 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.159188 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qjhkm" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.159498 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.160262 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.168893 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.181841 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-kolla-config\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310157 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310203 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310225 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6674f537-f800-4b05-912c-b2671e504c17-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310247 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310268 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310301 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjf25\" (UniqueName: \"kubernetes.io/projected/6674f537-f800-4b05-912c-b2671e504c17-kube-api-access-jjf25\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411845 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-config-data-default\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411908 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjf25\" (UniqueName: \"kubernetes.io/projected/6674f537-f800-4b05-912c-b2671e504c17-kube-api-access-jjf25\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411953 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411972 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-kolla-config\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412008 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6674f537-f800-4b05-912c-b2671e504c17-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412080 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.413823 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.414396 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-config-data-default\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.414643 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6674f537-f800-4b05-912c-b2671e504c17-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.415174 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.415277 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-kolla-config\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.442876 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.448336 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.474242 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjf25\" (UniqueName: \"kubernetes.io/projected/6674f537-f800-4b05-912c-b2671e504c17-kube-api-access-jjf25\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.488969 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.521232 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.589343 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerStarted","Data":"5a494b5365040c8bc0ddefc581e932c4375131be0145147547aba83d5a596b24"} Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.593972 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerStarted","Data":"5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090"} Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.274395 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.276410 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.281728 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.281973 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fdlz9" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.282226 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.282337 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.310676 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441504 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1755998-9149-49be-b10f-c4fe029728bc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441544 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndsc\" (UniqueName: \"kubernetes.io/projected/c1755998-9149-49be-b10f-c4fe029728bc-kube-api-access-7ndsc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441580 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441601 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: 
I0129 11:14:08.441643 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441669 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441700 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441736 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545392 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545483 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1755998-9149-49be-b10f-c4fe029728bc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545508 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ndsc\" (UniqueName: \"kubernetes.io/projected/c1755998-9149-49be-b10f-c4fe029728bc-kube-api-access-7ndsc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545541 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545567 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545598 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545646 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545679 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.547192 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.547840 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.548130 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1755998-9149-49be-b10f-c4fe029728bc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.548312 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.555616 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.556609 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.579558 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ndsc\" (UniqueName: 
\"kubernetes.io/projected/c1755998-9149-49be-b10f-c4fe029728bc-kube-api-access-7ndsc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.614604 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.615569 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.618882 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.619460 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.619549 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m6vm2" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.621204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.623942 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.629020 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.655754 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.751899 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kolla-config\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.751964 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-config-data\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.752022 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.752061 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8wt\" (UniqueName: \"kubernetes.io/projected/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kube-api-access-dr8wt\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " 
pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.752081 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.852963 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8wt\" (UniqueName: \"kubernetes.io/projected/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kube-api-access-dr8wt\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853076 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853107 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kolla-config\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853158 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-config-data\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.854141 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-config-data\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.865718 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kolla-config\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.871588 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.872036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.882315 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8wt\" (UniqueName: \"kubernetes.io/projected/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kube-api-access-dr8wt\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.912502 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.002664 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.686981 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerStarted","Data":"ce5363c18f79bb9c1f08e89717105847da3abd6525a9cd16fe23e08aae5ac420"} Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.732348 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.774363 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.941885 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.943709 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.969301 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.993478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.993567 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.993617 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095286 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " 
pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095343 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095381 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095926 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.096198 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.141730 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.310681 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.542360 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.543221 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.578691 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h5q6w" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.580774 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.607822 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"kube-state-metrics-0\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.708764 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"kube-state-metrics-0\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.719656 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerStarted","Data":"1170cf8324ef1a48f8a2b560460beca35748d70260701349c0c3a1810b1b114d"} Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.732626 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958","Type":"ContainerStarted","Data":"a9d15fd64111c3152bb3aed188baeb95bb13f70e61a520ab6fb744a75ae37941"} Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.768563 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"kube-state-metrics-0\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.905832 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:14:11 crc kubenswrapper[4593]: I0129 11:14:11.320678 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:14:11 crc kubenswrapper[4593]: I0129 11:14:11.749803 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerStarted","Data":"ade31aca7ba29e2371128a860beb89fe80c8c2fbd7528ceac5d2035097f7e6ad"} Jan 29 11:14:11 crc kubenswrapper[4593]: I0129 11:14:11.837871 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:14:11 crc kubenswrapper[4593]: W0129 11:14:11.923764 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1512a75d_a403_420b_a9be_f931faf1273a.slice/crio-a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2 WatchSource:0}: Error finding container a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2: Status 404 returned error can't find the container with id a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2 Jan 29 11:14:12 crc kubenswrapper[4593]: I0129 11:14:12.771840 4593 generic.go:334] "Generic (PLEG): container finished" podID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" exitCode=0 Jan 29 11:14:12 crc kubenswrapper[4593]: I0129 11:14:12.772224 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351"} Jan 29 11:14:12 crc kubenswrapper[4593]: I0129 11:14:12.783784 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerStarted","Data":"a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2"} Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.132752 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.141270 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.151342 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155538 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155720 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155838 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155932 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.156023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-j49bx" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdd4\" (UniqueName: \"kubernetes.io/projected/fd9a4c00-318d-4bd1-85cb-40971234c3cd-kube-api-access-vwdd4\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196834 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196897 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196979 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.197021 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.197066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306554 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306587 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdd4\" (UniqueName: \"kubernetes.io/projected/fd9a4c00-318d-4bd1-85cb-40971234c3cd-kube-api-access-vwdd4\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306611 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306651 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306690 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306708 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306739 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 
11:14:14.307394 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.308100 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.311483 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.313530 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.338200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.360713 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.389597 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.392208 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.394013 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdd4\" (UniqueName: \"kubernetes.io/projected/fd9a4c00-318d-4bd1-85cb-40971234c3cd-kube-api-access-vwdd4\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.514372 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.312693 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cc9qq"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.314085 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.320262 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.320308 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7bnzl" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.322117 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.322321 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.423843 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-x49lj"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.425671 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443046 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-combined-ca-bundle\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443147 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443192 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-ovn-controller-tls-certs\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lj78\" (UniqueName: \"kubernetes.io/projected/df5842a4-132b-4c71-a970-efe4f00a3827-kube-api-access-2lj78\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443268 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/df5842a4-132b-4c71-a970-efe4f00a3827-scripts\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443322 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-log-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.456865 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x49lj"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.544827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-etc-ovs\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.544883 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-ovn-controller-tls-certs\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.544926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6b7g\" (UniqueName: \"kubernetes.io/projected/22811af4-f063-480b-81b2-6c09b6526fea-kube-api-access-k6b7g\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.545060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lj78\" (UniqueName: \"kubernetes.io/projected/df5842a4-132b-4c71-a970-efe4f00a3827-kube-api-access-2lj78\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.545083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-lib\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.548414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df5842a4-132b-4c71-a970-efe4f00a3827-scripts\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549026 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df5842a4-132b-4c71-a970-efe4f00a3827-scripts\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549116 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-run\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549175 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-log\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-log-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549324 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-combined-ca-bundle\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549373 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22811af4-f063-480b-81b2-6c09b6526fea-scripts\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549455 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.550197 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-log-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.550313 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.553853 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run-ovn\") pod \"ovn-controller-cc9qq\" (UID: 
\"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.558899 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-ovn-controller-tls-certs\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.559032 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-combined-ca-bundle\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.566804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lj78\" (UniqueName: \"kubernetes.io/projected/df5842a4-132b-4c71-a970-efe4f00a3827-kube-api-access-2lj78\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652047 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22811af4-f063-480b-81b2-6c09b6526fea-scripts\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652401 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-etc-ovs\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652431 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6b7g\" (UniqueName: \"kubernetes.io/projected/22811af4-f063-480b-81b2-6c09b6526fea-kube-api-access-k6b7g\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-lib\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652497 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-run\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652517 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-log\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652761 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-log\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655088 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22811af4-f063-480b-81b2-6c09b6526fea-scripts\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655273 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-etc-ovs\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655744 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-lib\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655870 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-run\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.671687 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6b7g\" (UniqueName: \"kubernetes.io/projected/22811af4-f063-480b-81b2-6c09b6526fea-kube-api-access-k6b7g\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.673136 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.753270 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.235009 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.239107 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.242948 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.244688 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.244873 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5ddd6" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.245036 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.245254 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.327923 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.327986 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328303 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328349 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328493 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328515 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " 
pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rlg\" (UniqueName: \"kubernetes.io/projected/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-kube-api-access-l5rlg\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.429905 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.429960 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430017 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430063 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430086 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rlg\" (UniqueName: \"kubernetes.io/projected/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-kube-api-access-l5rlg\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430144 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430168 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.431330 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.431719 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.446067 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.446502 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.447653 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.453448 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.454479 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.463318 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.487000 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rlg\" (UniqueName: \"kubernetes.io/projected/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-kube-api-access-l5rlg\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.575220 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:38 crc kubenswrapper[4593]: E0129 11:14:38.552576 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 11:14:38 crc kubenswrapper[4593]: E0129 11:14:38.553494 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjf25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(6674f537-f800-4b05-912c-b2671e504c17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:38 crc kubenswrapper[4593]: E0129 11:14:38.554627 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="6674f537-f800-4b05-912c-b2671e504c17" Jan 29 11:14:39 crc kubenswrapper[4593]: E0129 11:14:39.124236 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="6674f537-f800-4b05-912c-b2671e504c17" Jan 29 11:14:39 crc kubenswrapper[4593]: E0129 11:14:39.342916 4593 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 29 11:14:39 crc kubenswrapper[4593]: E0129 11:14:39.343165 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n565h689h686h97h565h58dh64h67bh647h5f4h97h555h684h574h657h7bh655h6fhcbh5cfhcfh546h7fh5c8h676h684hbbh568h54fhc7h5cbh574q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(dc6f5a6c-3bf0-4f78-89f3-1e2683a37958): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:39 crc 
kubenswrapper[4593]: E0129 11:14:39.344436 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="dc6f5a6c-3bf0-4f78-89f3-1e2683a37958" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.131922 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="dc6f5a6c-3bf0-4f78-89f3-1e2683a37958" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.784178 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.784675 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pqcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-bvbjq_openstack(7e7df070-9e8b-4e24-ac24-4593ef89aca9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.785949 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.812469 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.815820 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4mvwn_openstack(4f968f6f-3c5b-4e45-baf2-cf20ac696d9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.817265 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.935359 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.935543 4593 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w69lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-t52gk_openstack(3616718a-e7ca-4045-941b-4109f08f4989): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.936743 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" podUID="3616718a-e7ca-4045-941b-4109f08f4989" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.938820 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.939350 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc96n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-swvvt_openstack(b705d0db-8509-4a63-9f5a-87976d741ebc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.940703 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" podUID="b705d0db-8509-4a63-9f5a-87976d741ebc" Jan 29 11:14:41 crc kubenswrapper[4593]: E0129 11:14:41.141568 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" Jan 29 11:14:41 crc kubenswrapper[4593]: E0129 11:14:41.141559 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" Jan 29 11:14:41 crc kubenswrapper[4593]: I0129 11:14:41.473272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq"] Jan 29 11:14:41 crc kubenswrapper[4593]: W0129 11:14:41.525831 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf5842a4_132b_4c71_a970_efe4f00a3827.slice/crio-0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0 WatchSource:0}: Error finding container 
0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0: Status 404 returned error can't find the container with id 0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0 Jan 29 11:14:41 crc kubenswrapper[4593]: I0129 11:14:41.980268 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:41 crc kubenswrapper[4593]: I0129 11:14:41.989285 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.030981 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091244 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"3616718a-e7ca-4045-941b-4109f08f4989\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091350 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"b705d0db-8509-4a63-9f5a-87976d741ebc\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091443 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"b705d0db-8509-4a63-9f5a-87976d741ebc\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091476 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"b705d0db-8509-4a63-9f5a-87976d741ebc\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091524 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"3616718a-e7ca-4045-941b-4109f08f4989\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.092274 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b705d0db-8509-4a63-9f5a-87976d741ebc" (UID: "b705d0db-8509-4a63-9f5a-87976d741ebc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.093324 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config" (OuterVolumeSpecName: "config") pod "3616718a-e7ca-4045-941b-4109f08f4989" (UID: "3616718a-e7ca-4045-941b-4109f08f4989"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.093944 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config" (OuterVolumeSpecName: "config") pod "b705d0db-8509-4a63-9f5a-87976d741ebc" (UID: "b705d0db-8509-4a63-9f5a-87976d741ebc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.099093 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n" (OuterVolumeSpecName: "kube-api-access-rc96n") pod "b705d0db-8509-4a63-9f5a-87976d741ebc" (UID: "b705d0db-8509-4a63-9f5a-87976d741ebc"). InnerVolumeSpecName "kube-api-access-rc96n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.100836 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr" (OuterVolumeSpecName: "kube-api-access-w69lr") pod "3616718a-e7ca-4045-941b-4109f08f4989" (UID: "3616718a-e7ca-4045-941b-4109f08f4989"). InnerVolumeSpecName "kube-api-access-w69lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.147965 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq" event={"ID":"df5842a4-132b-4c71-a970-efe4f00a3827","Type":"ContainerStarted","Data":"0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.149575 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" event={"ID":"b705d0db-8509-4a63-9f5a-87976d741ebc","Type":"ContainerDied","Data":"c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.149682 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.150888 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.150911 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" event={"ID":"3616718a-e7ca-4045-941b-4109f08f4989","Type":"ContainerDied","Data":"57892c814f48ce6859a27a763582b6a66ed12dadc0f9828ee1126b0622d692ee"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.158725 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9","Type":"ContainerStarted","Data":"8f99ebe56fbf1f5e33ea94183a28c9a507bc72a80c370d988abc16f526b76566"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203365 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203397 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203407 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203418 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203430 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.225128 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.232468 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.250299 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.256548 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.095040 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3616718a-e7ca-4045-941b-4109f08f4989" path="/var/lib/kubelet/pods/3616718a-e7ca-4045-941b-4109f08f4989/volumes" Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.098494 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b705d0db-8509-4a63-9f5a-87976d741ebc" path="/var/lib/kubelet/pods/b705d0db-8509-4a63-9f5a-87976d741ebc/volumes" Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.114331 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x49lj"] Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.184011 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" 
event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerStarted","Data":"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96"} Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.187111 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.372782 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.372827 4593 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.372944 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fsks2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(1512a75d-a403-420b-a9be-f931faf1273a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.374055 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openstack/kube-state-metrics-0" podUID="1512a75d-a403-420b-a9be-f931faf1273a" Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.193838 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerStarted","Data":"44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.284807 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerStarted","Data":"97aa67ebfa2393a610a45c308a8a4b80642d7f74a23d7c02feada231615c7809"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.298618 4593 generic.go:334] "Generic (PLEG): container finished" podID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" exitCode=0 Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.298717 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.313304 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fd9a4c00-318d-4bd1-85cb-40971234c3cd","Type":"ContainerStarted","Data":"10d04c87a12a3428710a9a6993e86d098b950d8e64c13eb6b4ff4ac35bdcab88"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.318230 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"538f749b613307642b44350e64b6cb037231a6b310457aa5fea6c9ebf1ae7b87"} Jan 29 11:14:44 crc kubenswrapper[4593]: E0129 11:14:44.323978 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="1512a75d-a403-420b-a9be-f931faf1273a" Jan 29 11:14:45 crc kubenswrapper[4593]: I0129 11:14:45.328552 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerStarted","Data":"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f"} Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.852320 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.854814 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.935966 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.994170 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.994403 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.994550 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.096506 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.096568 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.096656 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.097382 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.097497 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.120472 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.289942 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.751018 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.516528 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9","Type":"ContainerStarted","Data":"81570d092c1390e4d61bb8c50f70df099d79d6c5e0a359f15dc0834bd3f5d521"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.521207 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerStarted","Data":"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.526154 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fd9a4c00-318d-4bd1-85cb-40971234c3cd","Type":"ContainerStarted","Data":"3e9d5e0cbc4c1824dbe8de6c8b250af90d4e69ec8502da730733af3378cd013c"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.529049 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq" event={"ID":"df5842a4-132b-4c71-a970-efe4f00a3827","Type":"ContainerStarted","Data":"2cd0fa74c869ba6fc2b7b790ba76246c66b68dcb192a193bd1f6cb04700e2a57"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.530936 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"3c35f96b9e6d360871a4363b31c9b97c03bf9c434960bc17aed93f232b0ef3da"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.535659 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerStarted","Data":"d1f4402fb69794a1a6deb77fd346981fb6d8f2b3bd7eaaad3126ed929b264e54"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.537761 4593 generic.go:334] "Generic (PLEG): container finished" podID="c1755998-9149-49be-b10f-c4fe029728bc" containerID="97aa67ebfa2393a610a45c308a8a4b80642d7f74a23d7c02feada231615c7809" exitCode=0 Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.537830 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerDied","Data":"97aa67ebfa2393a610a45c308a8a4b80642d7f74a23d7c02feada231615c7809"} Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.563047 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5zjts" podStartSLOduration=6.126915528 podStartE2EDuration="40.563030752s" podCreationTimestamp="2026-01-29 11:14:09 +0000 UTC" firstStartedPulling="2026-01-29 11:14:12.782859606 +0000 UTC m=+918.655893797" lastFinishedPulling="2026-01-29 11:14:47.21897483 +0000 UTC m=+953.092009021" 
observedRunningTime="2026-01-29 11:14:49.556937899 +0000 UTC m=+955.429972120" watchObservedRunningTime="2026-01-29 11:14:49.563030752 +0000 UTC m=+955.436064943" Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.311228 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.312274 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.555710 4593 generic.go:334] "Generic (PLEG): container finished" podID="22811af4-f063-480b-81b2-6c09b6526fea" containerID="3c35f96b9e6d360871a4363b31c9b97c03bf9c434960bc17aed93f232b0ef3da" exitCode=0 Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.557906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerDied","Data":"3c35f96b9e6d360871a4363b31c9b97c03bf9c434960bc17aed93f232b0ef3da"} Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.562700 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerID="cd84694d15788663bcca8f1cea58b3f9c8ab044022df23a01ee0a17afa892276" exitCode=0 Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.562774 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"cd84694d15788663bcca8f1cea58b3f9c8ab044022df23a01ee0a17afa892276"} Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.565296 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerStarted","Data":"f10392e8ba068cb86aaf4c0479307405db5a114398a080dc4462c0cf885c71ba"} Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.565611 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.609322 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cc9qq" podStartSLOduration=29.949031603999998 podStartE2EDuration="35.609303772s" podCreationTimestamp="2026-01-29 11:14:15 +0000 UTC" firstStartedPulling="2026-01-29 11:14:41.529896643 +0000 UTC m=+947.402930834" lastFinishedPulling="2026-01-29 11:14:47.190168811 +0000 UTC m=+953.063203002" observedRunningTime="2026-01-29 11:14:50.602222203 +0000 UTC m=+956.475256404" watchObservedRunningTime="2026-01-29 11:14:50.609303772 +0000 UTC m=+956.482337963" Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.659325 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.508228658 podStartE2EDuration="43.659307326s" podCreationTimestamp="2026-01-29 11:14:07 +0000 UTC" firstStartedPulling="2026-01-29 11:14:09.731440038 +0000 UTC m=+915.604474239" lastFinishedPulling="2026-01-29 11:14:40.882518716 +0000 UTC m=+946.755552907" observedRunningTime="2026-01-29 11:14:50.648549559 +0000 UTC m=+956.521583760" watchObservedRunningTime="2026-01-29 11:14:50.659307326 +0000 UTC m=+956.532341517" Jan 29 11:14:51 crc kubenswrapper[4593]: I0129 11:14:51.568379 4593 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-5zjts" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" probeResult="failure" output=< Jan 29 11:14:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:14:51 crc kubenswrapper[4593]: > Jan 29 11:14:51 crc kubenswrapper[4593]: I0129 11:14:51.587695 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerStarted","Data":"191dc09ec9f00c9db76f1bdf3e46d2d35456e3970488e371a323804fbf1f6993"} Jan 29 11:14:51 crc kubenswrapper[4593]: I0129 11:14:51.595185 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"21186a30326857d6527171cd31a7d953ddb9db6ca1df416000c061f34f0ee3d1"} Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.608164 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"105ea44e3a1b6249121d9400cc3e0093a41d887065da0dd822b53606b0838287"} Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.608939 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.608982 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.613030 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerStarted","Data":"af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c"} Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.646621 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-x49lj" podStartSLOduration=33.831612102 podStartE2EDuration="37.646598518s" podCreationTimestamp="2026-01-29 11:14:15 +0000 UTC" firstStartedPulling="2026-01-29 11:14:43.375466973 +0000 UTC m=+949.248501164" lastFinishedPulling="2026-01-29 11:14:47.190453389 +0000 UTC m=+953.063487580" observedRunningTime="2026-01-29 11:14:52.643167807 +0000 UTC m=+958.516201998" watchObservedRunningTime="2026-01-29 11:14:52.646598518 +0000 UTC m=+958.519632709" Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.624513 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fd9a4c00-318d-4bd1-85cb-40971234c3cd","Type":"ContainerStarted","Data":"90f76511404af4bd114645242b92da7e485fc55b5702244b6b91afff28db1bce"} Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.626459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9","Type":"ContainerStarted","Data":"886a74852d3d5b1e67156d954d91d303e6f37a4bb0cba5783dd60c45e12a1ad0"} Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.628258 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958","Type":"ContainerStarted","Data":"363fef13a5ff1e3a65bb60b6f2eaecb8b1c519fbcf12f35e57117039af0c67ab"} Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.648113 4593 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.768157499 podStartE2EDuration="40.648097995s" podCreationTimestamp="2026-01-29 11:14:13 +0000 UTC" firstStartedPulling="2026-01-29 11:14:43.388589443 +0000 UTC m=+949.261623634" lastFinishedPulling="2026-01-29 11:14:52.268529939 +0000 UTC m=+958.141564130" observedRunningTime="2026-01-29 11:14:53.647253182 +0000 UTC m=+959.520287373" watchObservedRunningTime="2026-01-29 11:14:53.648097995 +0000 UTC m=+959.521132186" Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.678562 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.451656216 podStartE2EDuration="36.678524107s" podCreationTimestamp="2026-01-29 11:14:17 +0000 UTC" firstStartedPulling="2026-01-29 11:14:42.049524889 +0000 UTC m=+947.922559080" lastFinishedPulling="2026-01-29 11:14:52.27639278 +0000 UTC m=+958.149426971" observedRunningTime="2026-01-29 11:14:53.67453694 +0000 UTC m=+959.547571141" watchObservedRunningTime="2026-01-29 11:14:53.678524107 +0000 UTC m=+959.551558298" Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.002817 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.094078 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.35297166 podStartE2EDuration="46.094054895s" podCreationTimestamp="2026-01-29 11:14:08 +0000 UTC" firstStartedPulling="2026-01-29 11:14:09.800904542 +0000 UTC m=+915.673938723" lastFinishedPulling="2026-01-29 11:14:52.541987767 +0000 UTC m=+958.415021958" observedRunningTime="2026-01-29 11:14:53.70600508 +0000 UTC m=+959.579039271" watchObservedRunningTime="2026-01-29 11:14:54.094054895 +0000 UTC m=+959.967089086" Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.517006 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.576859 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:54 crc kubenswrapper[4593]: E0129 11:14:54.609267 4593 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:56878->38.102.83.147:45711: write tcp 38.102.83.147:56878->38.102.83.147:45711: write: broken pipe Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.617490 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.782177 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerID="af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c" exitCode=0 Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.783139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c"} Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.787510 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.856524 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:55 
crc kubenswrapper[4593]: I0129 11:14:55.474358 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-g6lk4"] Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.476268 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.480330 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.489531 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g6lk4"] Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.537951 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.579751 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.581322 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.590265 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605775 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn72l\" (UniqueName: \"kubernetes.io/projected/9299d646-8191-4da6-a2d1-d5a8c6492e91-kube-api-access-zn72l\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605881 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovn-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605905 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-combined-ca-bundle\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605960 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605987 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299d646-8191-4da6-a2d1-d5a8c6492e91-config\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" 
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.606031 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovs-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707824 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707905 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299d646-8191-4da6-a2d1-d5a8c6492e91-config\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707998 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovs-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708083 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn72l\" (UniqueName: \"kubernetes.io/projected/9299d646-8191-4da6-a2d1-d5a8c6492e91-kube-api-access-zn72l\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708120 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " 
pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708207 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovn-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708238 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-combined-ca-bundle\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708906 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovn-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.709509 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299d646-8191-4da6-a2d1-d5a8c6492e91-config\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.710148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovs-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.715329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-combined-ca-bundle\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.716075 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.732121 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn72l\" (UniqueName: \"kubernetes.io/projected/9299d646-8191-4da6-a2d1-d5a8c6492e91-kube-api-access-zn72l\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.805667 4593 generic.go:334] "Generic (PLEG): container finished" podID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerID="544b0e0df1d380946a3e8080c9c9fb0744ffc4f89a7dc3a91498dc76d46dd2a7" exitCode=0 Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.805766 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" 
event={"ID":"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b","Type":"ContainerDied","Data":"544b0e0df1d380946a3e8080c9c9fb0744ffc4f89a7dc3a91498dc76d46dd2a7"} Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.813892 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.814016 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.814052 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.814116 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.815154 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.815154 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.815892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.826774 4593 generic.go:334] "Generic (PLEG): container finished" podID="6674f537-f800-4b05-912c-b2671e504c17" containerID="191dc09ec9f00c9db76f1bdf3e46d2d35456e3970488e371a323804fbf1f6993" exitCode=0 Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.826895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerDied","Data":"191dc09ec9f00c9db76f1bdf3e46d2d35456e3970488e371a323804fbf1f6993"} Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.840694 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4vf\" (UniqueName: 
\"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.847325 4593 generic.go:334] "Generic (PLEG): container finished" podID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerID="da7810f16f10ab271866380a9652b5504d930f59d786d1df10f9e1a22d6586a4" exitCode=0 Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.847840 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerDied","Data":"da7810f16f10ab271866380a9652b5504d930f59d786d1df10f9e1a22d6586a4"} Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.871758 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g6lk4" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.112718 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.226826 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.268230 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.271190 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.283988 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.297142 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405205 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405765 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405820 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405847 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: 
\"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405945 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512247 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512327 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512344 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512400 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.513500 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.513721 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.513893 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" 
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.518129 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.521015 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.546970 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.139605 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.179912 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerStarted","Data":"4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134"} Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.232108 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hnrxg" podStartSLOduration=5.264674498 podStartE2EDuration="10.232090397s" podCreationTimestamp="2026-01-29 11:14:47 +0000 UTC" firstStartedPulling="2026-01-29 11:14:50.565887163 +0000 UTC m=+956.438921354" lastFinishedPulling="2026-01-29 11:14:55.533303062 +0000 UTC m=+961.406337253" observedRunningTime="2026-01-29 11:14:57.220165448 +0000 UTC m=+963.093199639" watchObservedRunningTime="2026-01-29 11:14:57.232090397 +0000 UTC m=+963.105124588" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.328989 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.370703 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.458147 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.458389 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.458446 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.466922 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr" (OuterVolumeSpecName: "kube-api-access-xr9cr") pod "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" (UID: "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b"). InnerVolumeSpecName "kube-api-access-xr9cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.524121 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.540154 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" (UID: "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.547232 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config" (OuterVolumeSpecName: "config") pod "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" (UID: "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:57 crc kubenswrapper[4593]: W0129 11:14:57.553268 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9288612d_73d6_410c_b109_9d3124e96f9c.slice/crio-55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d WatchSource:0}: Error finding container 55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d: Status 404 returned error can't find the container with id 55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.560499 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.560954 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.560970 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:57 crc kubenswrapper[4593]: W0129 11:14:57.574097 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9299d646_8191_4da6_a2d1_d5a8c6492e91.slice/crio-e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3 WatchSource:0}: Error finding container e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3: Status 404 returned error can't find the container with id e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3 Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.575373 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g6lk4"] Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.908104 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:14:57 crc kubenswrapper[4593]: W0129 11:14:57.917448 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba134367_9e72_466a_8aa3_0bda1deb7791.slice/crio-03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6 WatchSource:0}: Error finding container 03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6: Status 404 returned error can't find the container with id 03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6 Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.188161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" event={"ID":"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b","Type":"ContainerDied","Data":"007a02e651669e8d70d7d24081e75b51bae9e37c2bf6d5643b4ba609d3b0011b"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.188218 4593 scope.go:117] "RemoveContainer" containerID="544b0e0df1d380946a3e8080c9c9fb0744ffc4f89a7dc3a91498dc76d46dd2a7" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.188175 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.193354 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerStarted","Data":"632fdf977b3a3ad2d924089de4c26155a1b12bab23fab4b4d2a285a437c1b589"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.196201 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerStarted","Data":"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.196254 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerStarted","Data":"55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.198542 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerStarted","Data":"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.198826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.200365 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerStarted","Data":"4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.200462 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns" containerID="cri-o://4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795" gracePeriod=10 Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.200533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.206404 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g6lk4" event={"ID":"9299d646-8191-4da6-a2d1-d5a8c6492e91","Type":"ContainerStarted","Data":"e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.210400 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerStarted","Data":"03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6"} Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.245562 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371984.609236 podStartE2EDuration="52.24553957s" podCreationTimestamp="2026-01-29 11:14:06 +0000 UTC" firstStartedPulling="2026-01-29 11:14:08.685937797 +0000 UTC m=+914.558971988" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:14:58.231583948 +0000 UTC m=+964.104618139" watchObservedRunningTime="2026-01-29 11:14:58.24553957 +0000 UTC m=+964.118573761" 
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.263500 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podStartSLOduration=4.754770744 podStartE2EDuration="54.263484049s" podCreationTimestamp="2026-01-29 11:14:04 +0000 UTC" firstStartedPulling="2026-01-29 11:14:05.344318534 +0000 UTC m=+911.217352725" lastFinishedPulling="2026-01-29 11:14:54.853031839 +0000 UTC m=+960.726066030" observedRunningTime="2026-01-29 11:14:58.253069532 +0000 UTC m=+964.126103723" watchObservedRunningTime="2026-01-29 11:14:58.263484049 +0000 UTC m=+964.136518240" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.270003 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.220986754 podStartE2EDuration="48.269985473s" podCreationTimestamp="2026-01-29 11:14:10 +0000 UTC" firstStartedPulling="2026-01-29 11:14:11.933272335 +0000 UTC m=+917.806306526" lastFinishedPulling="2026-01-29 11:14:55.982271054 +0000 UTC m=+961.855305245" observedRunningTime="2026-01-29 11:14:58.269149581 +0000 UTC m=+964.142183772" watchObservedRunningTime="2026-01-29 11:14:58.269985473 +0000 UTC m=+964.143019664" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.290512 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.291552 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.299709 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.321845 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.326055 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.518488 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:14:58 crc kubenswrapper[4593]: E0129 11:14:58.518974 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerName="init" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.518988 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerName="init" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.519211 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerName="init" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.520159 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.524967 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.525287 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.649102 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.649358 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4nb56" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.710932 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756701 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-config\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756896 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756981 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-scripts\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.757029 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.757099 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k58x\" (UniqueName: \"kubernetes.io/projected/5320cc21-470d-450c-afa0-c5926e3243c6-kube-api-access-5k58x\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.757159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: 
I0129 11:14:58.858087 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858172 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858204 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-scripts\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858248 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k58x\" (UniqueName: \"kubernetes.io/projected/5320cc21-470d-450c-afa0-c5926e3243c6-kube-api-access-5k58x\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858351 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858385 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-config\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.859143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.860112 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-scripts\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.863236 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-config\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.864521 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.866515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.867267 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.888855 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k58x\" (UniqueName: \"kubernetes.io/projected/5320cc21-470d-450c-afa0-c5926e3243c6-kube-api-access-5k58x\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.916034 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.916396 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.018791 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.027250 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.132132 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" path="/var/lib/kubelet/pods/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b/volumes" Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.233744 4593 generic.go:334] "Generic (PLEG): container finished" podID="9288612d-73d6-410c-b109-9d3124e96f9c" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" exitCode=0 Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.234106 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerDied","Data":"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597"} Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.238258 4593 generic.go:334] "Generic (PLEG): container finished" podID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerID="4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795" exitCode=0 Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.238596 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerDied","Data":"4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795"} Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.382302 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hnrxg" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" probeResult="failure" output=< Jan 29 11:14:59 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:14:59 crc kubenswrapper[4593]: > Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.720384 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 11:14:59 crc kubenswrapper[4593]: W0129 11:14:59.742510 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5320cc21_470d_450c_afa0_c5926e3243c6.slice/crio-dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5 WatchSource:0}: Error finding container dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5: Status 404 returned error can't find the container with id dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5 Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.934486 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.069893 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.069986 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.070136 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.092411 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc" (OuterVolumeSpecName: "kube-api-access-8pqcc") pod "7e7df070-9e8b-4e24-ac24-4593ef89aca9" (UID: "7e7df070-9e8b-4e24-ac24-4593ef89aca9"). InnerVolumeSpecName "kube-api-access-8pqcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.139508 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e7df070-9e8b-4e24-ac24-4593ef89aca9" (UID: "7e7df070-9e8b-4e24-ac24-4593ef89aca9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.164052 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config" (OuterVolumeSpecName: "config") pod "7e7df070-9e8b-4e24-ac24-4593ef89aca9" (UID: "7e7df070-9e8b-4e24-ac24-4593ef89aca9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.173066 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.173095 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.173106 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178184 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 11:15:00 crc kubenswrapper[4593]: E0129 11:15:00.178587 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178610 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns" Jan 29 11:15:00 crc kubenswrapper[4593]: E0129 11:15:00.178685 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="init" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178697 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="init" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178909 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.179589 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.182396 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.182604 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.183714 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.197884 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.268377 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerDied","Data":"565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.270029 4593 scope.go:117] "RemoveContainer" containerID="4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.270312 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.282735 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.284390 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.285010 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.286190 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g6lk4" event={"ID":"9299d646-8191-4da6-a2d1-d5a8c6492e91","Type":"ContainerStarted","Data":"f65f29a02a36886ce3d7e342d32921b0f906594c830d8f38f18fb6431ad3619e"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.290598 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5320cc21-470d-450c-afa0-c5926e3243c6","Type":"ContainerStarted","Data":"dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 
11:15:00.295619 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" exitCode=0 Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.295701 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerDied","Data":"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.309581 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerStarted","Data":"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.335644 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-g6lk4" podStartSLOduration=5.335582595 podStartE2EDuration="5.335582595s" podCreationTimestamp="2026-01-29 11:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:00.327117419 +0000 UTC m=+966.200151610" watchObservedRunningTime="2026-01-29 11:15:00.335582595 +0000 UTC m=+966.208616786" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.341463 4593 scope.go:117] "RemoveContainer" containerID="da7810f16f10ab271866380a9652b5504d930f59d786d1df10f9e1a22d6586a4" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.387772 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.387859 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.387940 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.389147 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.415059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"collect-profiles-29494755-htvh8\" 
(UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.464831 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.464897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.471294 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.476135 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" podStartSLOduration=5.476114055 podStartE2EDuration="5.476114055s" podCreationTimestamp="2026-01-29 11:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:00.464591228 +0000 UTC m=+966.337625449" watchObservedRunningTime="2026-01-29 11:15:00.476114055 +0000 UTC m=+966.349148236" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.494057 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.556586 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.559528 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.609818 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.750927 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.039606 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.094870 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" path="/var/lib/kubelet/pods/7e7df070-9e8b-4e24-ac24-4593ef89aca9/volumes" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.121884 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.140927 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.142469 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.177491 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.359910 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.359994 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.360018 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.360043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.360074 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.376015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerStarted","Data":"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a"} Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.376286 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.398054 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-cgm9z" podStartSLOduration=5.398033327 podStartE2EDuration="5.398033327s" podCreationTimestamp="2026-01-29 11:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:01.396207729 +0000 UTC m=+967.269241920" watchObservedRunningTime="2026-01-29 11:15:01.398033327 +0000 UTC m=+967.271067518" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461616 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod 
\"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461733 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461864 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.462585 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.462799 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.462811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.463668 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.494450 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" 
Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.509076 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.570241 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 11:15:02 crc kubenswrapper[4593]: W0129 11:15:02.003400 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d624d92_85b0_48dc_94f4_047ac84aaa0c.slice/crio-e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc WatchSource:0}: Error finding container e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc: Status 404 returned error can't find the container with id e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.124511 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.129515 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.135620 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mpxfb" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.135872 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.136012 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.137431 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.178507 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278674 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-lock\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278715 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278736 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307ad072-fdfc-4c55-8891-bc041d755b1a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-cache\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc 
kubenswrapper[4593]: I0129 11:15:02.278859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278880 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4pwv\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-kube-api-access-k4pwv\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.320407 4593 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:54944->38.102.83.147:45711: write tcp 38.102.83.147:54944->38.102.83.147:45711: write: broken pipe Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.381909 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-lock\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.381966 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.381987 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307ad072-fdfc-4c55-8891-bc041d755b1a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.382051 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-cache\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.382130 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.382151 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4pwv\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-kube-api-access-k4pwv\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.383129 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-lock\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.383210 4593 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.383222 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.383257 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:02.883243179 +0000 UTC m=+968.756277370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.383806 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.384143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-cache\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.395757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307ad072-fdfc-4c55-8891-bc041d755b1a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.399219 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerStarted","Data":"c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c"} Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.399310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerStarted","Data":"e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc"} Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.402466 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns" containerID="cri-o://2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" gracePeriod=10 Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.402926 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5zjts" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" containerID="cri-o://77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" gracePeriod=2 Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.420766 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.429025 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" podStartSLOduration=2.428997379 podStartE2EDuration="2.428997379s" podCreationTimestamp="2026-01-29 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:02.426613496 +0000 UTC m=+968.299647687" watchObservedRunningTime="2026-01-29 11:15:02.428997379 +0000 UTC m=+968.302031580" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.430522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4pwv\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-kube-api-access-k4pwv\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.569023 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.900707 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.900907 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.901103 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.901154 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:03.901138028 +0000 UTC m=+969.774172209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.006865 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.011614 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103281 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103528 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103554 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103583 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103609 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103653 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103685 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.107103 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities" (OuterVolumeSpecName: "utilities") pod "80b1ef7b-9dfd-4910-99a8-830a1735fb79" (UID: "80b1ef7b-9dfd-4910-99a8-830a1735fb79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.112767 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf" (OuterVolumeSpecName: "kube-api-access-xj4vf") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "kube-api-access-xj4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.164745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6" (OuterVolumeSpecName: "kube-api-access-njvk6") pod "80b1ef7b-9dfd-4910-99a8-830a1735fb79" (UID: "80b1ef7b-9dfd-4910-99a8-830a1735fb79"). InnerVolumeSpecName "kube-api-access-njvk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.204835 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80b1ef7b-9dfd-4910-99a8-830a1735fb79" (UID: "80b1ef7b-9dfd-4910-99a8-830a1735fb79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205883 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205896 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205907 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205915 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.208551 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config" (OuterVolumeSpecName: "config") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.209248 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.265738 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.307395 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.307418 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.307428 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.411360 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5320cc21-470d-450c-afa0-c5926e3243c6","Type":"ContainerStarted","Data":"09f9724e79bce4ee329a8c8bec5b3420af1adbdb15836f3d8b44fdfd68055ebc"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.411413 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5320cc21-470d-450c-afa0-c5926e3243c6","Type":"ContainerStarted","Data":"2ed636bf32d447bd13812d8ebeaa5f27d6a5644f848884b286c9f4f83292c007"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.412568 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415525 4593 generic.go:334] "Generic (PLEG): container finished" podID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415584 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415673 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415716 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"ade31aca7ba29e2371128a860beb89fe80c8c2fbd7528ceac5d2035097f7e6ad"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415772 4593 scope.go:117] "RemoveContainer" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.418247 4593 generic.go:334] "Generic (PLEG): container finished" podID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerID="c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.418288 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerDied","Data":"c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420199 4593 generic.go:334] "Generic (PLEG): container finished" podID="9288612d-73d6-410c-b109-9d3124e96f9c" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420245 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerDied","Data":"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420260 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerDied","Data":"55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420272 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.421763 4593 generic.go:334] "Generic (PLEG): container finished" podID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerID="3a1884f5780e941a8c795fbe0356484ff14b38b8354e043148a53f7b7fef73d5" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.421788 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerDied","Data":"3a1884f5780e941a8c795fbe0356484ff14b38b8354e043148a53f7b7fef73d5"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.421803 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerStarted","Data":"2b0a11af2b235a2fb8adafd584c05dc53c5aec7086cbb35dcb104dd6b636f9bc"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.452045 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.151461438 podStartE2EDuration="5.45202704s" podCreationTimestamp="2026-01-29 11:14:58 +0000 UTC" firstStartedPulling="2026-01-29 11:14:59.751004436 +0000 UTC m=+965.624038627" lastFinishedPulling="2026-01-29 11:15:02.051570038 +0000 UTC m=+967.924604229" observedRunningTime="2026-01-29 11:15:03.441140529 +0000 UTC m=+969.314174720" watchObservedRunningTime="2026-01-29 11:15:03.45202704 +0000 UTC m=+969.325061231" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.455586 4593 scope.go:117] "RemoveContainer" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.494513 4593 scope.go:117] "RemoveContainer" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.518077 4593 scope.go:117] "RemoveContainer" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.518482 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be\": container with ID starting with 77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be not found: ID does not exist" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.518587 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be"} err="failed to get container status \"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be\": rpc error: code = NotFound desc = could not find container \"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be\": container with ID starting with 77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.518617 4593 scope.go:117] "RemoveContainer" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.519617 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 
11:15:03.520541 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96\": container with ID starting with 9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96 not found: ID does not exist" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520649 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96"} err="failed to get container status \"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96\": rpc error: code = NotFound desc = could not find container \"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96\": container with ID starting with 9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520669 4593 scope.go:117] "RemoveContainer" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.520930 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351\": container with ID starting with 88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351 not found: ID does not exist" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520950 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351"} err="failed to get container status \"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351\": rpc error: code = NotFound desc = could not find container \"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351\": container with ID starting with 88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520962 4593 scope.go:117] "RemoveContainer" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.535125 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.543129 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.557610 4593 scope.go:117] "RemoveContainer" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.559505 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.591120 4593 scope.go:117] "RemoveContainer" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.592123 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92\": 
container with ID starting with 2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92 not found: ID does not exist" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.592160 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92"} err="failed to get container status \"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92\": rpc error: code = NotFound desc = could not find container \"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92\": container with ID starting with 2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.592189 4593 scope.go:117] "RemoveContainer" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.592462 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597\": container with ID starting with 56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597 not found: ID does not exist" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.592497 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597"} err="failed to get container status \"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597\": rpc error: code = NotFound desc = could not find container \"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597\": container with ID starting with 56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.919581 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.919849 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.920051 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.920137 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:05.92010146 +0000 UTC m=+971.793135651 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.436253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerStarted","Data":"3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14"} Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.436701 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.860486 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.883060 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podStartSLOduration=3.883039157 podStartE2EDuration="3.883039157s" podCreationTimestamp="2026-01-29 11:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:04.468052274 +0000 UTC m=+970.341086495" watchObservedRunningTime="2026-01-29 11:15:04.883039157 +0000 UTC m=+970.756073348" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.037900 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.038042 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.038072 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.038745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d624d92-85b0-48dc-94f4-047ac84aaa0c" (UID: "8d624d92-85b0-48dc-94f4-047ac84aaa0c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.044758 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d624d92-85b0-48dc-94f4-047ac84aaa0c" (UID: "8d624d92-85b0-48dc-94f4-047ac84aaa0c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.060790 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j" (OuterVolumeSpecName: "kube-api-access-j4g5j") pod "8d624d92-85b0-48dc-94f4-047ac84aaa0c" (UID: "8d624d92-85b0-48dc-94f4-047ac84aaa0c"). InnerVolumeSpecName "kube-api-access-j4g5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.087068 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" path="/var/lib/kubelet/pods/80b1ef7b-9dfd-4910-99a8-830a1735fb79/volumes" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.088346 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" path="/var/lib/kubelet/pods/9288612d-73d6-410c-b109-9d3124e96f9c/volumes" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.140297 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.140339 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.140356 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.448515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerDied","Data":"e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc"} Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.448572 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.448734 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.951933 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:05 crc kubenswrapper[4593]: E0129 11:15:05.952133 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:05 crc kubenswrapper[4593]: E0129 11:15:05.952308 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:05 crc kubenswrapper[4593]: E0129 11:15:05.952352 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:09.952338873 +0000 UTC m=+975.825373064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034013 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jbnzf"] Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034443 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034467 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034483 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="init" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034491 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="init" Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034512 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034521 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns" Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034534 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-utilities" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034543 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-utilities" Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034558 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-content" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034566 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" 
containerName="extract-content" Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034579 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerName="collect-profiles" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034586 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerName="collect-profiles" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034785 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerName="collect-profiles" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034798 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034811 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.035295 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.037660 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.037798 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.037924 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.050661 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jbnzf"] Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154588 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154771 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154885 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154913 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.155061 
4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.155105 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.155189 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257169 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257207 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257243 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257343 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257370 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257429 4593 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.259166 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.259572 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.260380 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.263793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.264059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.277053 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.278353 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.352000 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: W0129 11:15:06.807155 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d1e7e96_e120_43f1_bff0_ea3d624e621b.slice/crio-d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6 WatchSource:0}: Error finding container d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6: Status 404 returned error can't find the container with id d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6 Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.809301 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jbnzf"] Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.145016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.307523 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.309079 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.324378 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.328727 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.472057 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerStarted","Data":"d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6"} Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.480111 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.480294 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.521959 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.522047 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.582572 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " 
pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.582727 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.583590 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.616960 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.617722 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.630953 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.098453 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.356024 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.441466 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.491556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerStarted","Data":"2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d"} Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.491601 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerStarted","Data":"2d499c9f38de6188424842997bab2cb4adbe4ba156fe5f3bb80b847c37491bff"} Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.513783 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-87bhd" podStartSLOduration=1.513761806 podStartE2EDuration="1.513761806s" podCreationTimestamp="2026-01-29 11:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:08.509861022 +0000 UTC m=+974.382895213" watchObservedRunningTime="2026-01-29 11:15:08.513761806 +0000 UTC m=+974.386795997" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.582184 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 
11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.600203 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.798098 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.799065 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.812339 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.918262 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.918463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.926288 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.927338 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.929400 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.952390 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020027 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020092 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020123 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020183 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.021262 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.073485 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.124764 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.131264 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.131358 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.135305 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.152818 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.252983 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.376055 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.377322 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.385648 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.490678 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.491893 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.499397 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.500204 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.514522 4593 generic.go:334] "Generic (PLEG): container finished" podID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerID="2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d" exitCode=0 Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.515521 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerDied","Data":"2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d"} Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.515736 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnrxg" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" containerID="cri-o://4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134" gracePeriod=2 Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.538492 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.538611 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.639714 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.639811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 
11:15:09.639863 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.639914 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.640706 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.671397 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.695227 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.741050 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.741192 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.741892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.763398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.816398 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.044797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:10 crc kubenswrapper[4593]: E0129 11:15:10.045395 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:10 crc kubenswrapper[4593]: E0129 11:15:10.045415 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:10 crc kubenswrapper[4593]: E0129 11:15:10.045481 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:18.04546257 +0000 UTC m=+983.918496761 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.527875 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerID="4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134" exitCode=0 Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.528194 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134"} Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.914975 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.511409 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.607241 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.607495 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-cgm9z" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" containerID="cri-o://16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" gracePeriod=10 Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.722549 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.776249 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.776336 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.777797 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8a9eb9e-18f2-4150-973c-2e7baaca3484" (UID: "d8a9eb9e-18f2-4150-973c-2e7baaca3484"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.796181 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk" (OuterVolumeSpecName: "kube-api-access-qn2fk") pod "d8a9eb9e-18f2-4150-973c-2e7baaca3484" (UID: "d8a9eb9e-18f2-4150-973c-2e7baaca3484"). InnerVolumeSpecName "kube-api-access-qn2fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.878864 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.879231 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.887988 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.983134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.983173 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.983258 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.984503 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities" (OuterVolumeSpecName: "utilities") pod "ba99bea9-cf82-4eb7-8c7b-f171c534fc62" (UID: "ba99bea9-cf82-4eb7-8c7b-f171c534fc62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.000111 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62" (OuterVolumeSpecName: "kube-api-access-jgv62") pod "ba99bea9-cf82-4eb7-8c7b-f171c534fc62" (UID: "ba99bea9-cf82-4eb7-8c7b-f171c534fc62"). InnerVolumeSpecName "kube-api-access-jgv62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.073166 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba99bea9-cf82-4eb7-8c7b-f171c534fc62" (UID: "ba99bea9-cf82-4eb7-8c7b-f171c534fc62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.085034 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.085062 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.085073 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.144431 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-cgm9z" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.422307 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.453223 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.557567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cjzzm" event={"ID":"e2687b78-f425-4fae-9af8-7021f3e01e36","Type":"ContainerStarted","Data":"69543955059b6a02d7efbea367354349bec1818ede0d3acfb63fa9c3aa6c1a0a"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.559975 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"d1f4402fb69794a1a6deb77fd346981fb6d8f2b3bd7eaaad3126ed929b264e54"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.560008 4593 scope.go:117] "RemoveContainer" containerID="4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.560115 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.565473 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerStarted","Data":"9ea8033b0ead06e96b066f4d434b2b21ca12373b475b3c1f489d3e7beb1ea468"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.582891 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerDied","Data":"2d499c9f38de6188424842997bab2cb4adbe4ba156fe5f3bb80b847c37491bff"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.582932 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d499c9f38de6188424842997bab2cb4adbe4ba156fe5f3bb80b847c37491bff" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.582987 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.590978 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592216 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" exitCode=0 Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592251 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerDied","Data":"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerDied","Data":"03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592368 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606163 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606335 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606403 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.613182 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.623055 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jbnzf" podStartSLOduration=1.573929867 podStartE2EDuration="6.623031265s" podCreationTimestamp="2026-01-29 11:15:06 +0000 UTC" 
firstStartedPulling="2026-01-29 11:15:06.809385144 +0000 UTC m=+972.682419335" lastFinishedPulling="2026-01-29 11:15:11.858486542 +0000 UTC m=+977.731520733" observedRunningTime="2026-01-29 11:15:12.607452329 +0000 UTC m=+978.480486530" watchObservedRunningTime="2026-01-29 11:15:12.623031265 +0000 UTC m=+978.496065456" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.626180 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4" (OuterVolumeSpecName: "kube-api-access-99nm4") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "kube-api-access-99nm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: W0129 11:15:12.640744 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2eab48b_4545_4fa3_81f1_6247ebcf425e.slice/crio-b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876 WatchSource:0}: Error finding container b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876: Status 404 returned error can't find the container with id b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876 Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.657338 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.709011 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.786029 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config" (OuterVolumeSpecName: "config") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.792191 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.815503 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.815560 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.816858 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.819189 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.877249 4593 scope.go:117] "RemoveContainer" containerID="af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.910747 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.916881 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.916911 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.929309 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.940844 4593 scope.go:117] "RemoveContainer" containerID="cd84694d15788663bcca8f1cea58b3f9c8ab044022df23a01ee0a17afa892276" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.969465 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.975404 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.980837 4593 scope.go:117] "RemoveContainer" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.002390 4593 scope.go:117] "RemoveContainer" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.020948 4593 scope.go:117] "RemoveContainer" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" Jan 29 11:15:13 crc kubenswrapper[4593]: E0129 11:15:13.021280 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a\": container with ID starting with 16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a not found: ID does not exist" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.021309 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a"} err="failed to get container status \"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a\": rpc error: code = NotFound desc = could not find container \"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a\": container with ID 
starting with 16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a not found: ID does not exist" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.021335 4593 scope.go:117] "RemoveContainer" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" Jan 29 11:15:13 crc kubenswrapper[4593]: E0129 11:15:13.021704 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c\": container with ID starting with 42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c not found: ID does not exist" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.021731 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c"} err="failed to get container status \"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c\": rpc error: code = NotFound desc = could not find container \"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c\": container with ID starting with 42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c not found: ID does not exist" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.085528 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" path="/var/lib/kubelet/pods/ba134367-9e72-466a-8aa3-0bda1deb7791/volumes" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.086403 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" path="/var/lib/kubelet/pods/ba99bea9-cf82-4eb7-8c7b-f171c534fc62/volumes" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.603084 4593 generic.go:334] "Generic (PLEG): container finished" podID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerID="f4b832d6a02cddde771b6eeb4da2b7e8c024cb3a623b350dff1e411d17b9ecfd" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.603150 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3a7-account-create-update-9b49r" event={"ID":"f2eab48b-4545-4fa3-81f1-6247ebcf425e","Type":"ContainerDied","Data":"f4b832d6a02cddde771b6eeb4da2b7e8c024cb3a623b350dff1e411d17b9ecfd"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.603177 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3a7-account-create-update-9b49r" event={"ID":"f2eab48b-4545-4fa3-81f1-6247ebcf425e","Type":"ContainerStarted","Data":"b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.605850 4593 generic.go:334] "Generic (PLEG): container finished" podID="3b4524da-e80b-4bd2-a116-061694417007" containerID="b2686e149913ab0d7eb8e1c1ab82711e8bc8d0f1e7c674ad1bb843e01690c119" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.605954 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-70b0-account-create-update-c8qbm" event={"ID":"3b4524da-e80b-4bd2-a116-061694417007","Type":"ContainerDied","Data":"b2686e149913ab0d7eb8e1c1ab82711e8bc8d0f1e7c674ad1bb843e01690c119"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.605974 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-70b0-account-create-update-c8qbm" 
event={"ID":"3b4524da-e80b-4bd2-a116-061694417007","Type":"ContainerStarted","Data":"a0fda54eb084c2cf19c1e6dcbc83a9e09d8417502f27c897188c3a798eb76994"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.608342 4593 generic.go:334] "Generic (PLEG): container finished" podID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerID="c00b7731a137cc5e16b524de8c2c6a1402d07e79205488315ad3920c71b523b5" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.608382 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c4fzt" event={"ID":"fdb1fb5b-1dc7-487a-b49d-d542eef7af31","Type":"ContainerDied","Data":"c00b7731a137cc5e16b524de8c2c6a1402d07e79205488315ad3920c71b523b5"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.608398 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c4fzt" event={"ID":"fdb1fb5b-1dc7-487a-b49d-d542eef7af31","Type":"ContainerStarted","Data":"61f5eeb49ae22b41c16de9e85095516b89b44d599286692b28762a74f7dca621"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.609852 4593 generic.go:334] "Generic (PLEG): container finished" podID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerID="1146c75a258cb4ad7f71cc2e37d3a74813526e1b88d59d1880e58f1ae91dd7d1" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.610593 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cjzzm" event={"ID":"e2687b78-f425-4fae-9af8-7021f3e01e36","Type":"ContainerDied","Data":"1146c75a258cb4ad7f71cc2e37d3a74813526e1b88d59d1880e58f1ae91dd7d1"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.107820 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.255698 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.262510 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.271897 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.276299 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"3b4524da-e80b-4bd2-a116-061694417007\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.276494 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"3b4524da-e80b-4bd2-a116-061694417007\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.277287 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b4524da-e80b-4bd2-a116-061694417007" (UID: "3b4524da-e80b-4bd2-a116-061694417007"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.282689 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45" (OuterVolumeSpecName: "kube-api-access-zrt45") pod "3b4524da-e80b-4bd2-a116-061694417007" (UID: "3b4524da-e80b-4bd2-a116-061694417007"). InnerVolumeSpecName "kube-api-access-zrt45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378265 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378308 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"e2687b78-f425-4fae-9af8-7021f3e01e36\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"e2687b78-f425-4fae-9af8-7021f3e01e36\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378383 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378831 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378853 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.379012 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"f2eab48b-4545-4fa3-81f1-6247ebcf425e" (UID: "f2eab48b-4545-4fa3-81f1-6247ebcf425e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.379093 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdb1fb5b-1dc7-487a-b49d-d542eef7af31" (UID: "fdb1fb5b-1dc7-487a-b49d-d542eef7af31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.379595 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2687b78-f425-4fae-9af8-7021f3e01e36" (UID: "e2687b78-f425-4fae-9af8-7021f3e01e36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.382167 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945" (OuterVolumeSpecName: "kube-api-access-qb945") pod "fdb1fb5b-1dc7-487a-b49d-d542eef7af31" (UID: "fdb1fb5b-1dc7-487a-b49d-d542eef7af31"). InnerVolumeSpecName "kube-api-access-qb945". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.382213 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8" (OuterVolumeSpecName: "kube-api-access-zlqs8") pod "f2eab48b-4545-4fa3-81f1-6247ebcf425e" (UID: "f2eab48b-4545-4fa3-81f1-6247ebcf425e"). InnerVolumeSpecName "kube-api-access-zlqs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.382559 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz" (OuterVolumeSpecName: "kube-api-access-spkhz") pod "e2687b78-f425-4fae-9af8-7021f3e01e36" (UID: "e2687b78-f425-4fae-9af8-7021f3e01e36"). InnerVolumeSpecName "kube-api-access-spkhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481242 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481280 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481293 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481304 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481315 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481325 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.626707 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c4fzt" event={"ID":"fdb1fb5b-1dc7-487a-b49d-d542eef7af31","Type":"ContainerDied","Data":"61f5eeb49ae22b41c16de9e85095516b89b44d599286692b28762a74f7dca621"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.626758 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f5eeb49ae22b41c16de9e85095516b89b44d599286692b28762a74f7dca621" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.626740 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.628913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cjzzm" event={"ID":"e2687b78-f425-4fae-9af8-7021f3e01e36","Type":"ContainerDied","Data":"69543955059b6a02d7efbea367354349bec1818ede0d3acfb63fa9c3aa6c1a0a"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.628931 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.628935 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69543955059b6a02d7efbea367354349bec1818ede0d3acfb63fa9c3aa6c1a0a" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.630901 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerID="44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624" exitCode=0 Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.630961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerDied","Data":"44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.636058 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3a7-account-create-update-9b49r" event={"ID":"f2eab48b-4545-4fa3-81f1-6247ebcf425e","Type":"ContainerDied","Data":"b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.636115 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.636186 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.641149 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-70b0-account-create-update-c8qbm" event={"ID":"3b4524da-e80b-4bd2-a116-061694417007","Type":"ContainerDied","Data":"a0fda54eb084c2cf19c1e6dcbc83a9e09d8417502f27c897188c3a798eb76994"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.641201 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fda54eb084c2cf19c1e6dcbc83a9e09d8417502f27c897188c3a798eb76994" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.641271 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.911357 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.917499 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.005798 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006112 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-content" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006127 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-content" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006136 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006142 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006152 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006159 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006169 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006174 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006185 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006190 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006201 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006207 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006219 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-utilities" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006224 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-utilities" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006242 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="init" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006248 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="init" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006258 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4524da-e80b-4bd2-a116-061694417007" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006266 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4524da-e80b-4bd2-a116-061694417007" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006276 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006281 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006414 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006423 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006436 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006446 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006457 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006466 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4524da-e80b-4bd2-a116-061694417007" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006474 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006971 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.009082 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.021146 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.090199 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.090259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.191325 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.191719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.192329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.210862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.349262 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.651232 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerStarted","Data":"b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0"} Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.651681 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.652863 4593 generic.go:334] "Generic (PLEG): container finished" podID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" exitCode=0 Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.652895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerDied","Data":"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f"} Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.711688 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.528156822 podStartE2EDuration="1m12.711673473s" podCreationTimestamp="2026-01-29 11:14:04 +0000 UTC" firstStartedPulling="2026-01-29 11:14:06.655265118 +0000 UTC m=+912.528299309" lastFinishedPulling="2026-01-29 11:14:40.838781769 +0000 UTC m=+946.711815960" observedRunningTime="2026-01-29 11:15:16.706177126 +0000 UTC m=+982.579211337" watchObservedRunningTime="2026-01-29 11:15:16.711673473 +0000 UTC m=+982.584707654" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.862693 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:16 crc kubenswrapper[4593]: W0129 11:15:16.875030 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod757d1461_f6a2_4062_be74_0abc5c507af2.slice/crio-78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd WatchSource:0}: Error finding container 78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd: Status 404 returned error can't find the container with id 78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.083670 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" path="/var/lib/kubelet/pods/d8a9eb9e-18f2-4150-973c-2e7baaca3484/volumes" Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.662859 4593 generic.go:334] "Generic (PLEG): container finished" podID="757d1461-f6a2-4062-be74-0abc5c507af2" containerID="b731ce61732546e5002e6093b39d4676cefa4ead9d8427f5427a357a3a10832e" exitCode=0 Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.662899 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sj2mz" event={"ID":"757d1461-f6a2-4062-be74-0abc5c507af2","Type":"ContainerDied","Data":"b731ce61732546e5002e6093b39d4676cefa4ead9d8427f5427a357a3a10832e"} Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.663346 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sj2mz" 
event={"ID":"757d1461-f6a2-4062-be74-0abc5c507af2","Type":"ContainerStarted","Data":"78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd"} Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.665081 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerStarted","Data":"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112"} Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.665403 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.727759 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.81662394 podStartE2EDuration="1m13.727740398s" podCreationTimestamp="2026-01-29 11:14:04 +0000 UTC" firstStartedPulling="2026-01-29 11:14:06.965415674 +0000 UTC m=+912.838449865" lastFinishedPulling="2026-01-29 11:14:41.876532132 +0000 UTC m=+947.749566323" observedRunningTime="2026-01-29 11:15:17.725374765 +0000 UTC m=+983.598408976" watchObservedRunningTime="2026-01-29 11:15:17.727740398 +0000 UTC m=+983.600774589" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.125794 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:18 crc kubenswrapper[4593]: E0129 11:15:18.126053 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:18 crc kubenswrapper[4593]: E0129 11:15:18.126080 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:18 crc kubenswrapper[4593]: E0129 11:15:18.127161 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:34.12690255 +0000 UTC m=+999.999936741 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.440607 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.441710 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.457662 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.534557 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.534946 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.567051 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.568222 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.570560 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.594048 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636033 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636123 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636143 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636206 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636948 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.653739 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.737864 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.738100 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.739745 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.759342 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.761283 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.887827 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.142981 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.166948 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.248340 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.248966 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"757d1461-f6a2-4062-be74-0abc5c507af2\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.249145 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"757d1461-f6a2-4062-be74-0abc5c507af2\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.258233 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72" (OuterVolumeSpecName: "kube-api-access-rvb72") pod "757d1461-f6a2-4062-be74-0abc5c507af2" (UID: "757d1461-f6a2-4062-be74-0abc5c507af2"). InnerVolumeSpecName "kube-api-access-rvb72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.258563 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "757d1461-f6a2-4062-be74-0abc5c507af2" (UID: "757d1461-f6a2-4062-be74-0abc5c507af2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.354654 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.354699 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.618817 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:15:19 crc kubenswrapper[4593]: W0129 11:15:19.629435 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12899826_03ea_4b37_b523_74946fd54dee.slice/crio-a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae WatchSource:0}: Error finding container a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae: Status 404 returned error can't find the container with id a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.679122 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b99c-account-create-update-49grn" event={"ID":"12899826-03ea-4b37-b523-74946fd54dee","Type":"ContainerStarted","Data":"a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.680797 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sj2mz" event={"ID":"757d1461-f6a2-4062-be74-0abc5c507af2","Type":"ContainerDied","Data":"78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.680838 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.680842 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.700197 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerStarted","Data":"2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.700246 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerStarted","Data":"7edbe171478325ecdd7fbb56c02ea4d91fc80a6acf8ee4d5d37e9f6cbb0c7f50"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.747752 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-pz4nl" podStartSLOduration=1.747730082 podStartE2EDuration="1.747730082s" podCreationTimestamp="2026-01-29 11:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:19.743622304 +0000 UTC m=+985.616656495" watchObservedRunningTime="2026-01-29 11:15:19.747730082 +0000 UTC m=+985.620764273" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.770353 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:15:19 crc kubenswrapper[4593]: E0129 11:15:19.770916 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" containerName="mariadb-account-create-update" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.770933 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" containerName="mariadb-account-create-update" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.771107 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" containerName="mariadb-account-create-update" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.771616 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.775103 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lfv28" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.787583 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.789080 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864784 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864839 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864867 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864961 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.966918 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.966987 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.967026 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.967149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod 
\"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.973421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.973724 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.984559 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.986453 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.097092 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.718085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b99c-account-create-update-49grn" event={"ID":"12899826-03ea-4b37-b523-74946fd54dee","Type":"ContainerDied","Data":"cfeb01d9eafd6f66b4b9db53f4dc0ef8f8de91ea87a6bf0dc6e1a2b4cfb6bce8"} Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.719532 4593 generic.go:334] "Generic (PLEG): container finished" podID="12899826-03ea-4b37-b523-74946fd54dee" containerID="cfeb01d9eafd6f66b4b9db53f4dc0ef8f8de91ea87a6bf0dc6e1a2b4cfb6bce8" exitCode=0 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.721842 4593 generic.go:334] "Generic (PLEG): container finished" podID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" containerID="9ea8033b0ead06e96b066f4d434b2b21ca12373b475b3c1f489d3e7beb1ea468" exitCode=0 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.721945 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerDied","Data":"9ea8033b0ead06e96b066f4d434b2b21ca12373b475b3c1f489d3e7beb1ea468"} Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.724603 4593 generic.go:334] "Generic (PLEG): container finished" podID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerID="2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5" exitCode=0 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.724775 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerDied","Data":"2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5"} Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.742065 4593 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cc9qq" podUID="df5842a4-132b-4c71-a970-efe4f00a3827" containerName="ovn-controller" probeResult="failure" output=< Jan 29 11:15:20 crc kubenswrapper[4593]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 11:15:20 crc kubenswrapper[4593]: > Jan 29 11:15:20 crc kubenswrapper[4593]: W0129 11:15:20.789451 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6bbbb39_f79c_4647_976b_6225ac21e63b.slice/crio-75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59 WatchSource:0}: Error finding container 75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59: Status 404 returned error can't find the container with id 75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.808253 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.735650 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerStarted","Data":"75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59"} Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.914367 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.917754 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.933314 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.027091 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.027161 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.027365 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.129003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " 
pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.129426 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.129524 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.130251 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.130278 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.172293 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.246129 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.358964 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.373012 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.412137 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.451589 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.515417 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539374 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"12899826-03ea-4b37-b523-74946fd54dee\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539457 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"a84071c3-9564-41ef-b38f-fd40e1403fa8\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539623 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"a84071c3-9564-41ef-b38f-fd40e1403fa8\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539700 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"12899826-03ea-4b37-b523-74946fd54dee\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.541004 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12899826-03ea-4b37-b523-74946fd54dee" (UID: "12899826-03ea-4b37-b523-74946fd54dee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.542438 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a84071c3-9564-41ef-b38f-fd40e1403fa8" (UID: "a84071c3-9564-41ef-b38f-fd40e1403fa8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.549725 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2" (OuterVolumeSpecName: "kube-api-access-sjlt2") pod "a84071c3-9564-41ef-b38f-fd40e1403fa8" (UID: "a84071c3-9564-41ef-b38f-fd40e1403fa8"). InnerVolumeSpecName "kube-api-access-sjlt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.551087 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn" (OuterVolumeSpecName: "kube-api-access-gn4dn") pod "12899826-03ea-4b37-b523-74946fd54dee" (UID: "12899826-03ea-4b37-b523-74946fd54dee"). InnerVolumeSpecName "kube-api-access-gn4dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641162 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641245 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641292 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641392 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641438 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641489 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641520 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641973 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641997 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.642009 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.642368 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.643505 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.644618 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.646929 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf" (OuterVolumeSpecName: "kube-api-access-k8mgf") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "kube-api-access-k8mgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.661457 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.692387 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.704666 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.721909 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts" (OuterVolumeSpecName: "scripts") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.743247 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b99c-account-create-update-49grn" event={"ID":"12899826-03ea-4b37-b523-74946fd54dee","Type":"ContainerDied","Data":"a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae"} Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.743288 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.743363 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744235 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744256 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744266 4593 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744276 4593 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744284 4593 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744292 4593 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744300 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.745765 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerDied","Data":"d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6"} Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.745788 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.745820 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.755495 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerDied","Data":"7edbe171478325ecdd7fbb56c02ea4d91fc80a6acf8ee4d5d37e9f6cbb0c7f50"} Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.755541 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7edbe171478325ecdd7fbb56c02ea4d91fc80a6acf8ee4d5d37e9f6cbb0c7f50" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.755600 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:23 crc kubenswrapper[4593]: I0129 11:15:23.085088 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" path="/var/lib/kubelet/pods/757d1461-f6a2-4062-be74-0abc5c507af2/volumes" Jan 29 11:15:23 crc kubenswrapper[4593]: I0129 11:15:23.127204 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:15:23 crc kubenswrapper[4593]: I0129 11:15:23.770702 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerStarted","Data":"78dbfe42e92421682419cdaea165d73392eb4f589d0fece85d9b2c89989dd32e"} Jan 29 11:15:24 crc kubenswrapper[4593]: I0129 11:15:24.785268 4593 generic.go:334] "Generic (PLEG): container finished" podID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerID="c6e6f1ac55c53b64f5a8d09aab84fcbf98dc6146a8ab819b2f4a3c9dfdc9a62a" exitCode=0 Jan 29 11:15:24 crc kubenswrapper[4593]: I0129 11:15:24.785453 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"c6e6f1ac55c53b64f5a8d09aab84fcbf98dc6146a8ab819b2f4a3c9dfdc9a62a"} Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.735667 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cc9qq" podUID="df5842a4-132b-4c71-a970-efe4f00a3827" containerName="ovn-controller" probeResult="failure" output=< Jan 29 11:15:25 crc kubenswrapper[4593]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 11:15:25 crc kubenswrapper[4593]: > Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.801564 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.816817 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.837338 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077502 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:26 crc kubenswrapper[4593]: E0129 11:15:26.077885 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" 
containerName="swift-ring-rebalance" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077902 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" containerName="swift-ring-rebalance" Jan 29 11:15:26 crc kubenswrapper[4593]: E0129 11:15:26.077918 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerName="mariadb-database-create" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077925 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerName="mariadb-database-create" Jan 29 11:15:26 crc kubenswrapper[4593]: E0129 11:15:26.077937 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12899826-03ea-4b37-b523-74946fd54dee" containerName="mariadb-account-create-update" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077944 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="12899826-03ea-4b37-b523-74946fd54dee" containerName="mariadb-account-create-update" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.078081 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" containerName="swift-ring-rebalance" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.078093 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerName="mariadb-database-create" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.078106 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="12899826-03ea-4b37-b523-74946fd54dee" containerName="mariadb-account-create-update" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.080010 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.090706 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.092947 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.218882 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.218931 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219033 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219130 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219155 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.264042 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.321139 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: 
\"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.321963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322141 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322169 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322218 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322288 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322314 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322863 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: 
\"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.324626 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.354979 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.397227 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.810716 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerStarted","Data":"26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7"} Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.902824 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:26 crc kubenswrapper[4593]: W0129 11:15:26.912994 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6405a039_ae6d_4255_891c_ef8452e19df3.slice/crio-a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e WatchSource:0}: Error finding container a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e: Status 404 returned error can't find the container with id a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.372499 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.374013 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.376583 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.380626 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.442551 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.442749 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.544339 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.544472 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.545406 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.565286 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.693429 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.826194 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerStarted","Data":"bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a"} Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.826504 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerStarted","Data":"a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e"} Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.856045 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cc9qq-config-tbd2h" podStartSLOduration=1.856019178 podStartE2EDuration="1.856019178s" podCreationTimestamp="2026-01-29 11:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:27.842439886 +0000 UTC m=+993.715474077" watchObservedRunningTime="2026-01-29 11:15:27.856019178 +0000 UTC m=+993.729053369" Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.191813 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.835495 4593 generic.go:334] "Generic (PLEG): container finished" podID="6405a039-ae6d-4255-891c-ef8452e19df3" containerID="bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a" exitCode=0 Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.835580 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerDied","Data":"bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a"} Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.837460 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerStarted","Data":"0fe30972eae6fe027a2826fd5f842e093abe225a13c6181792f977be2efdbfe1"} Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.715751 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.717773 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.727193 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.816153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.816193 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.816240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.853902 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerStarted","Data":"18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b"} Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.874216 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-625ls" podStartSLOduration=2.874198575 podStartE2EDuration="2.874198575s" podCreationTimestamp="2026-01-29 11:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:29.873492166 +0000 UTC m=+995.746526357" watchObservedRunningTime="2026-01-29 11:15:29.874198575 +0000 UTC m=+995.747232766" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918135 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918178 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918221 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 
11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918838 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.919535 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.941199 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.053012 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.725499 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cc9qq" Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.862807 4593 generic.go:334] "Generic (PLEG): container finished" podID="56d59502-9350-4842-bd01-35d55f0b47fa" containerID="18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b" exitCode=0 Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.862847 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerDied","Data":"18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b"} Jan 29 11:15:31 crc kubenswrapper[4593]: I0129 11:15:31.871539 4593 generic.go:334] "Generic (PLEG): container finished" podID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerID="26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7" exitCode=0 Jan 29 11:15:31 crc kubenswrapper[4593]: I0129 11:15:31.871791 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7"} Jan 29 11:15:33 crc kubenswrapper[4593]: I0129 11:15:33.946057 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:15:33 crc kubenswrapper[4593]: I0129 11:15:33.946864 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:15:34 crc kubenswrapper[4593]: I0129 11:15:34.195250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:34 crc kubenswrapper[4593]: I0129 11:15:34.202074 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:34 crc kubenswrapper[4593]: I0129 11:15:34.388206 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:15:35 crc kubenswrapper[4593]: I0129 11:15:35.819455 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.136033 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.149258 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.188451 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.240960 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.241098 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.265567 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.343046 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.343165 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.343834 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 
11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.366107 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.370382 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.376145 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.377205 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.386616 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.493458 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.525131 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.526364 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.551553 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.551621 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.584187 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.639238 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.640427 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.643752 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.645709 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.645897 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.646029 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.653960 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654072 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654113 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.684973 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.692491 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.695456 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 
11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.696488 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.715080 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.752380 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.757951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758016 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758049 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758089 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.783777 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.808721 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.810429 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"cinder-db-create-9hskn\" (UID: 
\"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.811951 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.837837 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.840988 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859810 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859864 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859882 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859939 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859964 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.870515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.878872 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.889419 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnrhr\" (UniqueName: 
\"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.960257 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961184 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961305 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961333 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.962173 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.962505 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.970775 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.973255 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.981970 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.997092 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.057343 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063254 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063400 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.064737 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.081056 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.154796 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.165772 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.165861 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.166492 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.186822 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.286107 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.839415 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.853251 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894537 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894583 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894794 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894868 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894896 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894917 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.895347 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.895398 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.896711 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts" (OuterVolumeSpecName: "scripts") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.896745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run" (OuterVolumeSpecName: "var-run") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.906384 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.922076 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc" (OuterVolumeSpecName: "kube-api-access-m28jc") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "kube-api-access-m28jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.993024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerDied","Data":"0fe30972eae6fe027a2826fd5f842e093abe225a13c6181792f977be2efdbfe1"} Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.993068 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe30972eae6fe027a2826fd5f842e093abe225a13c6181792f977be2efdbfe1" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.993141 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.995786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerDied","Data":"a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e"} Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.995812 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.995861 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000244 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"56d59502-9350-4842-bd01-35d55f0b47fa\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000439 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"56d59502-9350-4842-bd01-35d55f0b47fa\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000969 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000988 4593 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001000 4593 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001011 4593 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001024 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001035 4593 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.006068 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56d59502-9350-4842-bd01-35d55f0b47fa" (UID: "56d59502-9350-4842-bd01-35d55f0b47fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.025526 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx" (OuterVolumeSpecName: "kube-api-access-mckcx") pod "56d59502-9350-4842-bd01-35d55f0b47fa" (UID: "56d59502-9350-4842-bd01-35d55f0b47fa"). InnerVolumeSpecName "kube-api-access-mckcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:39 crc kubenswrapper[4593]: E0129 11:15:39.029864 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 11:15:39 crc kubenswrapper[4593]: E0129 11:15:39.033775 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4lrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-db54x_openstack(a6bbbb39-f79c-4647-976b-6225ac21e63b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:15:39 crc kubenswrapper[4593]: E0129 11:15:39.034881 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-db54x" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.103855 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.104082 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") on node \"crc\" DevicePath \"\"" Jan 
29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.407539 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.714620 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:15:39 crc kubenswrapper[4593]: W0129 11:15:39.725806 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c572c7d_971f_4f21_81cf_f5d5f7d5d9fe.slice/crio-ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d WatchSource:0}: Error finding container ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d: Status 404 returned error can't find the container with id ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.769309 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.858454 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:15:39 crc kubenswrapper[4593]: W0129 11:15:39.883084 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d46f220_cb33_4768_91f5_c59e98c41af4.slice/crio-a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db WatchSource:0}: Error finding container a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db: Status 404 returned error can't find the container with id a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.019585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-140c-account-create-update-csqgp" event={"ID":"1ef7a572-9631-4078-a6ed-419d2a4dfdf9","Type":"ContainerStarted","Data":"81b776500c98b0a9276a4f2e3935ca69f3a82dbb87538e400d856f7bf4e5802a"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.029889 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerStarted","Data":"40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.032399 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" exitCode=0 Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.032459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.032483 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"5ea6d9d61fd2cf95d30b451aea020cc55aa6add991037bc5209ce7d2a046ef7e"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.051334 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0486-account-create-update-f9r68" 
event={"ID":"6d46f220-cb33-4768-91f5-c59e98c41af4","Type":"ContainerStarted","Data":"a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.062088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9hskn" event={"ID":"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe","Type":"ContainerStarted","Data":"ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d"} Jan 29 11:15:40 crc kubenswrapper[4593]: E0129 11:15:40.062923 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-db54x" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.082768 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.092919 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4q5nh" podStartSLOduration=4.660458898 podStartE2EDuration="19.092897018s" podCreationTimestamp="2026-01-29 11:15:21 +0000 UTC" firstStartedPulling="2026-01-29 11:15:24.787839432 +0000 UTC m=+990.660873623" lastFinishedPulling="2026-01-29 11:15:39.220277552 +0000 UTC m=+1005.093311743" observedRunningTime="2026-01-29 11:15:40.075113045 +0000 UTC m=+1005.948147236" watchObservedRunningTime="2026-01-29 11:15:40.092897018 +0000 UTC m=+1005.965931199" Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.105830 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.113010 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.187902 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.224042 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.235017 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.304251 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.073791 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerStarted","Data":"19987dc4123000c07157f5b274ec3539c6844f271738b4bce8683858a4a97786"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.080249 4593 generic.go:334] "Generic (PLEG): container finished" podID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerID="db6e520018218e0ecd1d4a8d69f63a0e96eea393f5e0abbccf345503319fb4c2" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.084180 4593 generic.go:334] "Generic (PLEG): container finished" podID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerID="43d82ed1472c3625ce9296a41e8408518af652ca97d81bd779f6e88331c78c4e" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.113465 4593 generic.go:334] "Generic (PLEG): 
container finished" podID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerID="8daab26085422d8b821fec9dd8845576bd1f7996b7bd02a206e4ec1ed954891a" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.137423 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" path="/var/lib/kubelet/pods/6405a039-ae6d-4255-891c-ef8452e19df3/volumes" Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138084 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"0d0755f7783c5a3fce0e7aaeb9ebf8fc5a1b0ef602a35a7fd8d076194eb911a5"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138115 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0486-account-create-update-f9r68" event={"ID":"6d46f220-cb33-4768-91f5-c59e98c41af4","Type":"ContainerDied","Data":"db6e520018218e0ecd1d4a8d69f63a0e96eea393f5e0abbccf345503319fb4c2"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138128 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9hskn" event={"ID":"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe","Type":"ContainerDied","Data":"43d82ed1472c3625ce9296a41e8408518af652ca97d81bd779f6e88331c78c4e"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138142 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4c8a-account-create-update-psrpm" event={"ID":"fbee97db-a8f1-43e0-ac0b-ec58529b2c03","Type":"ContainerDied","Data":"8daab26085422d8b821fec9dd8845576bd1f7996b7bd02a206e4ec1ed954891a"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138153 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4c8a-account-create-update-psrpm" event={"ID":"fbee97db-a8f1-43e0-ac0b-ec58529b2c03","Type":"ContainerStarted","Data":"52c3d566be62f7b3d906eb419cd5398b1f874dac4318e3b655d95285b1760187"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.143503 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerID="d302776b71ae9de08283f287bc6180cc80cb27e0867558e7d6ef7199f716f657" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.143586 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-140c-account-create-update-csqgp" event={"ID":"1ef7a572-9631-4078-a6ed-419d2a4dfdf9","Type":"ContainerDied","Data":"d302776b71ae9de08283f287bc6180cc80cb27e0867558e7d6ef7199f716f657"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.146008 4593 generic.go:334] "Generic (PLEG): container finished" podID="52b59817-1d9d-431d-8055-cf98107b89a2" containerID="26e9d793caead0da7c6fbe2d2cc88998f753f02199ec672516904069fc61c2fc" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.146067 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdz52" event={"ID":"52b59817-1d9d-431d-8055-cf98107b89a2","Type":"ContainerDied","Data":"26e9d793caead0da7c6fbe2d2cc88998f753f02199ec672516904069fc61c2fc"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.146082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdz52" event={"ID":"52b59817-1d9d-431d-8055-cf98107b89a2","Type":"ContainerStarted","Data":"b785ccbd805876d6971e08b5433aca3992b45b4e6be43abcc2d0897531f24fb0"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.149253 4593 generic.go:334] "Generic (PLEG): container 
finished" podID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerID="9d37cf9a7f03d5742ea9e7314623a8e8f189e15526f469c97b71739526cfc70b" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.149277 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jgv94" event={"ID":"115d89c5-8038-4b55-9f1d-d0f169ee0b53","Type":"ContainerDied","Data":"9d37cf9a7f03d5742ea9e7314623a8e8f189e15526f469c97b71739526cfc70b"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.149292 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jgv94" event={"ID":"115d89c5-8038-4b55-9f1d-d0f169ee0b53","Type":"ContainerStarted","Data":"31a0af7b667010f12dd92d2c3d2bdcf8d785c222dccec254e7c9ab66ac0c956c"} Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.160906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"5ea99dce0931642c048cb124d51210d01f68a0c9d1a827e3958df487a4f80d5c"} Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.168871 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f"} Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.246922 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.247270 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.566502 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.685929 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"52b59817-1d9d-431d-8055-cf98107b89a2\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.686277 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"52b59817-1d9d-431d-8055-cf98107b89a2\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.687673 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52b59817-1d9d-431d-8055-cf98107b89a2" (UID: "52b59817-1d9d-431d-8055-cf98107b89a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.693366 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4" (OuterVolumeSpecName: "kube-api-access-lwlg4") pod "52b59817-1d9d-431d-8055-cf98107b89a2" (UID: "52b59817-1d9d-431d-8055-cf98107b89a2"). InnerVolumeSpecName "kube-api-access-lwlg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.790610 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.790662 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.004549 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.010658 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.019617 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.038184 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.047976 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.096889 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.098540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.098827 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "115d89c5-8038-4b55-9f1d-d0f169ee0b53" (UID: "115d89c5-8038-4b55-9f1d-d0f169ee0b53"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.099040 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.099222 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.099364 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"6d46f220-cb33-4768-91f5-c59e98c41af4\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.100521 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.100650 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.100859 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"6d46f220-cb33-4768-91f5-c59e98c41af4\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.102039 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.102756 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.103563 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.101016 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbee97db-a8f1-43e0-ac0b-ec58529b2c03" (UID: 
"fbee97db-a8f1-43e0-ac0b-ec58529b2c03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.101416 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" (UID: "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.102037 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d46f220-cb33-4768-91f5-c59e98c41af4" (UID: "6d46f220-cb33-4768-91f5-c59e98c41af4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.107787 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp" (OuterVolumeSpecName: "kube-api-access-9nxmp") pod "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" (UID: "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe"). InnerVolumeSpecName "kube-api-access-9nxmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.110759 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll" (OuterVolumeSpecName: "kube-api-access-l48ll") pod "6d46f220-cb33-4768-91f5-c59e98c41af4" (UID: "6d46f220-cb33-4768-91f5-c59e98c41af4"). InnerVolumeSpecName "kube-api-access-l48ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.116828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8" (OuterVolumeSpecName: "kube-api-access-xkbv8") pod "fbee97db-a8f1-43e0-ac0b-ec58529b2c03" (UID: "fbee97db-a8f1-43e0-ac0b-ec58529b2c03"). InnerVolumeSpecName "kube-api-access-xkbv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.125032 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2" (OuterVolumeSpecName: "kube-api-access-l7bc2") pod "115d89c5-8038-4b55-9f1d-d0f169ee0b53" (UID: "115d89c5-8038-4b55-9f1d-d0f169ee0b53"). InnerVolumeSpecName "kube-api-access-l7bc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.127798 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl" (OuterVolumeSpecName: "kube-api-access-tshsl") pod "1ef7a572-9631-4078-a6ed-419d2a4dfdf9" (UID: "1ef7a572-9631-4078-a6ed-419d2a4dfdf9"). InnerVolumeSpecName "kube-api-access-tshsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.154447 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ef7a572-9631-4078-a6ed-419d2a4dfdf9" (UID: "1ef7a572-9631-4078-a6ed-419d2a4dfdf9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.183476 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9hskn" event={"ID":"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe","Type":"ContainerDied","Data":"ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.183520 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.183581 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.187814 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4c8a-account-create-update-psrpm" event={"ID":"fbee97db-a8f1-43e0-ac0b-ec58529b2c03","Type":"ContainerDied","Data":"52c3d566be62f7b3d906eb419cd5398b1f874dac4318e3b655d95285b1760187"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.188191 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c3d566be62f7b3d906eb419cd5398b1f874dac4318e3b655d95285b1760187" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.187821 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.189943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-140c-account-create-update-csqgp" event={"ID":"1ef7a572-9631-4078-a6ed-419d2a4dfdf9","Type":"ContainerDied","Data":"81b776500c98b0a9276a4f2e3935ca69f3a82dbb87538e400d856f7bf4e5802a"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.189975 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b776500c98b0a9276a4f2e3935ca69f3a82dbb87538e400d856f7bf4e5802a" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.190027 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.192694 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.192769 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdz52" event={"ID":"52b59817-1d9d-431d-8055-cf98107b89a2","Type":"ContainerDied","Data":"b785ccbd805876d6971e08b5433aca3992b45b4e6be43abcc2d0897531f24fb0"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.192829 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b785ccbd805876d6971e08b5433aca3992b45b4e6be43abcc2d0897531f24fb0" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.194313 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jgv94" event={"ID":"115d89c5-8038-4b55-9f1d-d0f169ee0b53","Type":"ContainerDied","Data":"31a0af7b667010f12dd92d2c3d2bdcf8d785c222dccec254e7c9ab66ac0c956c"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.194342 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a0af7b667010f12dd92d2c3d2bdcf8d785c222dccec254e7c9ab66ac0c956c" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.194392 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.200015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"fa7e71015c1b2be01d5f5981751087bd1cea0cca46687ab9c86c925c42c245ce"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.200100 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"90ef97ef119e260947d77b74c01609fa837e2c9223961887abf5012eb91089f8"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.200229 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"1d71f5edac5c04adc917e6e121934d8398671db0557c20eb1573f86276c682d3"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.202384 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0486-account-create-update-f9r68" event={"ID":"6d46f220-cb33-4768-91f5-c59e98c41af4","Type":"ContainerDied","Data":"a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.202409 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.202416 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206768 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206795 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206807 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206816 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206825 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206834 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206843 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206852 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206861 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.351514 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:15:43 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:15:43 crc kubenswrapper[4593]: > Jan 29 11:15:47 crc kubenswrapper[4593]: I0129 11:15:47.247403 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerStarted","Data":"b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0"} Jan 29 11:15:47 crc 
kubenswrapper[4593]: I0129 11:15:47.263965 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wzm6z" podStartSLOduration=4.648576027 podStartE2EDuration="11.263941973s" podCreationTimestamp="2026-01-29 11:15:36 +0000 UTC" firstStartedPulling="2026-01-29 11:15:40.127279466 +0000 UTC m=+1006.000313657" lastFinishedPulling="2026-01-29 11:15:46.742645412 +0000 UTC m=+1012.615679603" observedRunningTime="2026-01-29 11:15:47.261426876 +0000 UTC m=+1013.134461077" watchObservedRunningTime="2026-01-29 11:15:47.263941973 +0000 UTC m=+1013.136976164" Jan 29 11:15:49 crc kubenswrapper[4593]: I0129 11:15:49.273587 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"c22af3db1da1ae1129b4ec6fe15d486bf3eacf9f0173cc870a43a6edb37e08ac"} Jan 29 11:15:49 crc kubenswrapper[4593]: I0129 11:15:49.273993 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"8533b35814b444f9ddc2d79d0a6e8fb8e59a8ae2d286b48ff34f52ab8340e70e"} Jan 29 11:15:50 crc kubenswrapper[4593]: I0129 11:15:50.289050 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"f9fba7c0509323453d3cf6ed2a1801c969ce5c3c1a673fb0c483cea4ca0554e7"} Jan 29 11:15:50 crc kubenswrapper[4593]: I0129 11:15:50.289096 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"43dff5e70b4ad12e2e56d06fc999ce3dd5f51c617c48da5ef14dfbd5eb6bb928"} Jan 29 11:15:51 crc kubenswrapper[4593]: I0129 11:15:51.303238 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" exitCode=0 Jan 29 11:15:51 crc kubenswrapper[4593]: I0129 11:15:51.303290 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.311905 4593 generic.go:334] "Generic (PLEG): container finished" podID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerID="b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0" exitCode=0 Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.311988 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerDied","Data":"b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321891 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"8c937d48b6809e97a05669102c342c5012c0365005aae5e341168f784ebf2fe5"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321940 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"9ec374c157e39e6657c33c07e3522999ff1ac300e55747d5335dfb5e0bb6a420"} Jan 
29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"78ed3445d2f7349c2a6010e30322a72662c800595c4d47b86979e008ede84af8"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321963 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"357726caca141c948b187325349278573ec5989439588cb4329e0a6ba0004c78"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321972 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"a6ef676591f532dcad332fb732fdb48c9f3ec5a0704446d91ee3e7c9d27193e3"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321983 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"ff1fa00004dc29f1cce6c3f17a1cc1ec156454f9b15dc0635164c8dd81f15278"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.302332 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:15:53 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:15:53 crc kubenswrapper[4593]: > Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.336963 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"3aa4adc48f32aa56051c740cb98579c90ef0bac7f9e462c434ebd043f8612db0"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.339910 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.341895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerStarted","Data":"6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.383978 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.538099391 podStartE2EDuration="52.383962131s" podCreationTimestamp="2026-01-29 11:15:01 +0000 UTC" firstStartedPulling="2026-01-29 11:15:40.371548424 +0000 UTC m=+1006.244582615" lastFinishedPulling="2026-01-29 11:15:51.217411164 +0000 UTC m=+1017.090445355" observedRunningTime="2026-01-29 11:15:53.383110818 +0000 UTC m=+1019.256145029" watchObservedRunningTime="2026-01-29 11:15:53.383962131 +0000 UTC m=+1019.256996322" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.406060 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-db54x" podStartSLOduration=2.548786906 podStartE2EDuration="34.40603563s" podCreationTimestamp="2026-01-29 11:15:19 +0000 UTC" firstStartedPulling="2026-01-29 11:15:20.797706862 +0000 UTC m=+986.670741053" 
lastFinishedPulling="2026-01-29 11:15:52.654955586 +0000 UTC m=+1018.527989777" observedRunningTime="2026-01-29 11:15:53.40189901 +0000 UTC m=+1019.274933211" watchObservedRunningTime="2026-01-29 11:15:53.40603563 +0000 UTC m=+1019.279069821" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.428114 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k4l8n" podStartSLOduration=11.667982055 podStartE2EDuration="24.428095898s" podCreationTimestamp="2026-01-29 11:15:29 +0000 UTC" firstStartedPulling="2026-01-29 11:15:40.040784668 +0000 UTC m=+1005.913818859" lastFinishedPulling="2026-01-29 11:15:52.800898511 +0000 UTC m=+1018.673932702" observedRunningTime="2026-01-29 11:15:53.421429401 +0000 UTC m=+1019.294463592" watchObservedRunningTime="2026-01-29 11:15:53.428095898 +0000 UTC m=+1019.301130089" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.677316 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727408 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727829 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727846 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727859 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727866 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727880 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727888 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727897 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" containerName="ovn-config" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727906 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" containerName="ovn-config" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727923 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727930 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727943 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 
11:15:53.727949 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727961 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727969 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727988 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerName="keystone-db-sync" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727995 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerName="keystone-db-sync" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.728016 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728024 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728191 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerName="keystone-db-sync" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728205 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728220 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728233 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728241 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728252 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728260 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728270 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728279 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" containerName="ovn-config" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.729121 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.731777 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.746080 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.776398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.776556 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.776666 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.786294 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr" (OuterVolumeSpecName: "kube-api-access-gnrhr") pod "9c0b4a25-540c-47dd-96fb-fdc6872721b5" (UID: "9c0b4a25-540c-47dd-96fb-fdc6872721b5"). InnerVolumeSpecName "kube-api-access-gnrhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.836055 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c0b4a25-540c-47dd-96fb-fdc6872721b5" (UID: "9c0b4a25-540c-47dd-96fb-fdc6872721b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.843527 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data" (OuterVolumeSpecName: "config-data") pod "9c0b4a25-540c-47dd-96fb-fdc6872721b5" (UID: "9c0b4a25-540c-47dd-96fb-fdc6872721b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.878951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879019 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879057 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879222 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879246 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879300 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879314 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879324 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.982740 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") 
pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983279 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983384 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983532 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983553 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983893 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984245 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984645 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984817 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: 
\"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.985509 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.004250 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.048016 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.362556 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.362713 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerDied","Data":"19987dc4123000c07157f5b274ec3539c6844f271738b4bce8683858a4a97786"} Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.363985 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19987dc4123000c07157f5b274ec3539c6844f271738b4bce8683858a4a97786" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.631419 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.661917 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.663721 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.667804 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668091 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668199 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668256 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668345 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.756826 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.783815 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.804990 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805029 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805091 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805140 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805158 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805189 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " 
pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.847844 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.849523 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.874777 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908679 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908836 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908957 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908995 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.914693 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.917325 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 
11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.917951 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.918870 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.924662 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.957225 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.011560 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.011908 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.011999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.012040 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.012107 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 
11:15:55.012132 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.044401 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.049099 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.061573 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.061742 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhpvr" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.062043 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170015 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170077 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170138 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170272 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170291 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: 
I0129 11:15:55.171194 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.176180 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.183974 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.221190 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.222300 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.223134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.223793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.242844 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.271928 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272031 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 
11:15:55.272103 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272144 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272212 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.298535 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.299971 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.332898 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.333291 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.333407 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.335023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pkstn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.349386 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.366611 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386582 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386653 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386699 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386758 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386779 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386796 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386825 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386849 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386870 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386891 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.388159 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.418552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.445686 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.448469 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.448694 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.451376 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.454181 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" event={"ID":"75b7f494-5bdf-48a0-95a4-745655079166","Type":"ContainerStarted","Data":"2a41223bedf76d4fd1fd63bd5a7474603d89c512636bb2a6267cd36446322174"} Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.460704 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.462831 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.462867 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.463584 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.486183 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.487829 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc 
kubenswrapper[4593]: I0129 11:15:55.487881 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489168 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489223 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489280 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489764 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.490443 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.495822 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.500481 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.500735 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.505685 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.508437 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.540908 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xg5l8" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.594449 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.594720 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.594762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.614532 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.616716 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.624811 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.635041 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.635160 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699373 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699442 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699603 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699721 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699913 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.700011 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.704510 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.713984 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhpvr" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.714166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.716873 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801572 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801618 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801676 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801699 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801730 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801771 4593 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801844 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.802985 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.807826 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.808275 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.809528 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.811734 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.827388 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.875624 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.876151 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.881032 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.885338 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.892485 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.911087 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.926968 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.000108 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.013084 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014352 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014467 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014562 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.022058 4593 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.023091 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.036327 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qf2gb" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.036571 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.036651 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.057568 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.073371 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.074468 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.084947 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.085091 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.085183 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2pqk2" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.112755 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116263 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116283 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116313 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116355 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116394 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116429 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116462 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116484 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116541 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116555 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116578 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.117032 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.123812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.124828 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.149541 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.168426 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.169329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220730 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220807 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220854 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220879 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220912 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220943 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220964 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220982 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.226804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.234011 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.234303 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.236233 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.244289 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.255691 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.282212 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.309611 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.311387 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.311567 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.315824 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322547 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322685 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " 
pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.327451 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.368386 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.425382 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.425450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.429619 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.436469 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.437027 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.437052 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.437108 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.438046 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.438243 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.438620 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.439404 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.458502 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.461472 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.486043 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.522914 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.531346 4593 generic.go:334] "Generic (PLEG): container finished" podID="75b7f494-5bdf-48a0-95a4-745655079166" containerID="ddb63bd3499a1d03d89e38f1924510a054aae77eea34b67608f0f9a0d9d08549" exitCode=0 Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.531510 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" event={"ID":"75b7f494-5bdf-48a0-95a4-745655079166","Type":"ContainerDied","Data":"ddb63bd3499a1d03d89e38f1924510a054aae77eea34b67608f0f9a0d9d08549"} Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.551145 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerStarted","Data":"f93093eedad3e691c33b05950a5766a9bfd338de35a4024df89e92e1e6b5e974"} Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.603706 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.635671 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.829282 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.287961 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.345166 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.419729 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.439075 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.491760 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.577146 4593 generic.go:334] "Generic (PLEG): container finished" podID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerID="746618261d342c822d0641c0709710a02daa46246cf61c311c0480573cb3deb9" exitCode=0 Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.577213 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" event={"ID":"622ba42a-ba2c-4296-a192-4342eca1ac9c","Type":"ContainerDied","Data":"746618261d342c822d0641c0709710a02daa46246cf61c311c0480573cb3deb9"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.577243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" event={"ID":"622ba42a-ba2c-4296-a192-4342eca1ac9c","Type":"ContainerStarted","Data":"213e53d8a008fd4b685317395335491ab3da62d8c0fe3cb7974f899383c50b68"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.583801 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerStarted","Data":"81e674e8a5ccd570da2b45a02c26820c6aece1f8b0def79a73d4b051b04177a1"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.593858 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.593983 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594032 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod 
\"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594203 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594228 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.597193 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerStarted","Data":"e190e45570748f76e4003c2271bb97bb9945d02157bf9978762b8a5417306bd1"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.625970 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc699bb9-mhr4g" event={"ID":"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4","Type":"ContainerStarted","Data":"89376b5d197b69125b3a6abd1f18c2e1c2f09575f848fb7b067180fd45d54911"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.639008 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" event={"ID":"75b7f494-5bdf-48a0-95a4-745655079166","Type":"ContainerDied","Data":"2a41223bedf76d4fd1fd63bd5a7474603d89c512636bb2a6267cd36446322174"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.639331 4593 scope.go:117] "RemoveContainer" containerID="ddb63bd3499a1d03d89e38f1924510a054aae77eea34b67608f0f9a0d9d08549" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.640136 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:57 crc kubenswrapper[4593]: W0129 11:15:57.643169 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fe4b5cd_471d_49d2_bf2b_c3a6bac48aa9.slice/crio-b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1 WatchSource:0}: Error finding container b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1: Status 404 returned error can't find the container with id b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1 Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.652262 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.654501 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr" (OuterVolumeSpecName: "kube-api-access-4r8wr") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "kube-api-access-4r8wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.657595 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerStarted","Data":"4a77796204d00631fc171e9b5f3f1adaf76dc3ea5c4251742c0c78ae086cb84b"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.698082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerStarted","Data":"d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.699628 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.700500 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.706415 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-579dc58d97-z59ff" event={"ID":"c95d7c5f-c170-4c14-966f-acdbfa95582d","Type":"ContainerStarted","Data":"d50c694222ceb4b9afc6610284cd592d5480cbcc3fe1b8d77d9d22d8a2e395e4"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.737813 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k7lbh" podStartSLOduration=3.737792245 podStartE2EDuration="3.737792245s" podCreationTimestamp="2026-01-29 11:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:57.732176726 +0000 UTC m=+1023.605210937" watchObservedRunningTime="2026-01-29 11:15:57.737792245 +0000 UTC m=+1023.610826446" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.782090 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.795691 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config" (OuterVolumeSpecName: "config") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.801035 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.802260 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.802287 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.832225 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.859541 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.861190 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.904047 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.904077 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.904087 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.138057 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.183117 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.185485 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253217 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253288 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253734 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253795 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.271232 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89" (OuterVolumeSpecName: "kube-api-access-j2f89") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "kube-api-access-j2f89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.302573 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.303619 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.315564 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config" (OuterVolumeSpecName: "config") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.321391 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.326175 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358068 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358107 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358121 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358130 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358138 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358145 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.730612 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b868669f-fp8w5" event={"ID":"622ba42a-ba2c-4296-a192-4342eca1ac9c","Type":"ContainerDied","Data":"213e53d8a008fd4b685317395335491ab3da62d8c0fe3cb7974f899383c50b68"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.730989 4593 scope.go:117] "RemoveContainer" containerID="746618261d342c822d0641c0709710a02daa46246cf61c311c0480573cb3deb9" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.731129 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.766906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerStarted","Data":"b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.785405 4593 generic.go:334] "Generic (PLEG): container finished" podID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" exitCode=0 Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.785494 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerDied","Data":"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.785528 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerStarted","Data":"27df2f7abd836abf6cd98d3ccb15264008f2c53f8cce156f8a156ba7ca552d82"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.814619 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerStarted","Data":"b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.832709 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.845577 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerStarted","Data":"48df691aa2eae747d4bfbb1c9e2a92cb2fce2abef2c0b184a7c467030b299d90"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.866437 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.881595 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qt4jn" podStartSLOduration=3.8815776079999997 podStartE2EDuration="3.881577608s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:58.810407299 +0000 UTC m=+1024.683441490" watchObservedRunningTime="2026-01-29 11:15:58.881577608 +0000 UTC m=+1024.754611799" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.121466 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" path="/var/lib/kubelet/pods/622ba42a-ba2c-4296-a192-4342eca1ac9c/volumes" Jan 29 11:15:59 crc 
kubenswrapper[4593]: I0129 11:15:59.681702 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b7f494-5bdf-48a0-95a4-745655079166" path="/var/lib/kubelet/pods/75b7f494-5bdf-48a0-95a4-745655079166/volumes" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.682946 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.683007 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:15:59 crc kubenswrapper[4593]: E0129 11:15:59.684160 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f494-5bdf-48a0-95a4-745655079166" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.684181 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f494-5bdf-48a0-95a4-745655079166" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: E0129 11:15:59.684219 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.684227 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.687131 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b7f494-5bdf-48a0-95a4-745655079166" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.687169 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.688875 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.688916 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.689094 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802580 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802655 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802846 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904271 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904317 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904347 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" 
Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904462 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.905594 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.906975 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.907876 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.910144 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.934407 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.036582 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.053690 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.053757 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.726095 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.887596 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54cbb9595c-pxkrk" event={"ID":"4eb162fe-a643-47e7-b254-d6f394cc10a3","Type":"ContainerStarted","Data":"133a890db821bdd702c17ce64066fb1c09e02bfe05952cb746dcbd9bf0d47a30"} Jan 29 11:16:01 crc kubenswrapper[4593]: I0129 11:16:01.162555 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:01 crc kubenswrapper[4593]: > Jan 29 11:16:03 crc kubenswrapper[4593]: I0129 11:16:03.314716 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:03 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:03 crc kubenswrapper[4593]: > Jan 29 11:16:03 crc kubenswrapper[4593]: I0129 11:16:03.946743 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:16:03 crc kubenswrapper[4593]: I0129 11:16:03.947279 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.539414 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.581497 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.582823 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.586683 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.612584 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660190 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660270 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660327 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660355 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660385 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660433 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660477 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.698149 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.728391 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bdffb4784-5zp8q"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.730600 4593 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.743471 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bdffb4784-5zp8q"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.761922 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.761972 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762014 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762090 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762130 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.763445 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.763718 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"horizon-fbf566cdb-kbm9z\" (UID: 
\"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.764467 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.772268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.775552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.784117 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.806529 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864748 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-combined-ca-bundle\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864842 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-tls-certs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864917 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-scripts\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864976 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-secret-key\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " 
pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.865034 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-config-data\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.865137 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jvx\" (UniqueName: \"kubernetes.io/projected/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-kube-api-access-q8jvx\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.865164 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-logs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.909082 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967360 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-logs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967732 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-combined-ca-bundle\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967777 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-tls-certs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967836 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-scripts\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967886 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-secret-key\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-config-data\") pod 
\"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.968013 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jvx\" (UniqueName: \"kubernetes.io/projected/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-kube-api-access-q8jvx\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.970328 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-logs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.971414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-scripts\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.972749 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-config-data\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.973868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-combined-ca-bundle\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.983649 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-secret-key\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.984283 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-tls-certs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.985239 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jvx\" (UniqueName: \"kubernetes.io/projected/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-kube-api-access-q8jvx\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.048368 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.276846 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:16:05 crc kubenswrapper[4593]: W0129 11:16:05.280831 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9761a4f_8669_4e74_9f8e_ed8b9778af11.slice/crio-ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf WatchSource:0}: Error finding container ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf: Status 404 returned error can't find the container with id ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.616344 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bdffb4784-5zp8q"] Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.935092 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf"} Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.936136 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"d374a3bfab0e23a81102eb51da83b7c8b58f2c94e01933be70521699b15ff521"} Jan 29 11:16:10 crc kubenswrapper[4593]: I0129 11:16:10.987919 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerStarted","Data":"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3"} Jan 29 11:16:10 crc kubenswrapper[4593]: I0129 11:16:10.989780 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:16:11 crc kubenswrapper[4593]: I0129 11:16:11.026945 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" podStartSLOduration=16.026926767 podStartE2EDuration="16.026926767s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:16:11.025966501 +0000 UTC m=+1036.899000702" watchObservedRunningTime="2026-01-29 11:16:11.026926767 +0000 UTC m=+1036.899960958" Jan 29 11:16:11 crc kubenswrapper[4593]: I0129 11:16:11.103488 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:11 crc kubenswrapper[4593]: > Jan 29 11:16:13 crc kubenswrapper[4593]: I0129 11:16:13.302049 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:13 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:13 crc kubenswrapper[4593]: > Jan 29 11:16:13 crc kubenswrapper[4593]: E0129 11:16:13.717799 4593 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 29 11:16:13 crc kubenswrapper[4593]: E0129 11:16:13.718046 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h65fh5ffh5fbhfbh578h5fch58dh595h545hf6h665h557h64ch546h586h56ch75h8h599h558hc8hb5h5bbh65h8bh554h665h54h5b4h5c8hb9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f789a029-2899-4cb2-8b99-55b77db98b9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.040754 4593 generic.go:334] "Generic (PLEG): container finished" podID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerID="d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4" exitCode=0 Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.041103 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerDied","Data":"d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4"} Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.637826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.707430 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.708179 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" containerID="cri-o://3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14" gracePeriod=10 Jan 29 11:16:18 crc kubenswrapper[4593]: I0129 11:16:18.064479 4593 generic.go:334] "Generic (PLEG): container finished" podID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerID="3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14" exitCode=0 Jan 29 11:16:18 crc kubenswrapper[4593]: I0129 11:16:18.064813 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerDied","Data":"3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14"} Jan 29 11:16:21 crc kubenswrapper[4593]: E0129 11:16:21.135189 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:21 crc kubenswrapper[4593]: E0129 11:16:21.135886 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n597hb6hb9h5fch689h5fbh56h86h5f4hf8h685h546hd7h596h5bbhcch67h56ch588h54ch7bh55bh76h5d5h5b9h584h76h67ch654hfdh699h5d9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2llwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dc699bb9-mhr4g_openstack(8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:21 crc kubenswrapper[4593]: I0129 11:16:21.138856 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:21 crc kubenswrapper[4593]: > Jan 29 11:16:21 crc kubenswrapper[4593]: E0129 11:16:21.145966 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5dc699bb9-mhr4g" podUID="8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" Jan 29 11:16:21 crc kubenswrapper[4593]: I0129 11:16:21.510303 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:23 crc kubenswrapper[4593]: I0129 11:16:23.339892 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:23 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:23 crc kubenswrapper[4593]: > Jan 29 11:16:24 crc kubenswrapper[4593]: E0129 11:16:24.381869 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:24 crc kubenswrapper[4593]: E0129 11:16:24.382850 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h585hb4hd5h5ddh549h568hb9h574h696h555hfdh568h66bh68bh566h58h5d9h5c8h5d7h5dbh556h666h669h5c6h594hdfh579h99h677h54h5bbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd28q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-579dc58d97-z59ff_openstack(c95d7c5f-c170-4c14-966f-acdbfa95582d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:24 crc kubenswrapper[4593]: E0129 11:16:24.385783 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-579dc58d97-z59ff" podUID="c95d7c5f-c170-4c14-966f-acdbfa95582d" Jan 29 11:16:27 crc kubenswrapper[4593]: I0129 11:16:27.033234 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.146258 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.146453 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q8ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-dd7hj_openstack(3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.148268 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-dd7hj" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.182471 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-579dc58d97-z59ff" event={"ID":"c95d7c5f-c170-4c14-966f-acdbfa95582d","Type":"ContainerDied","Data":"d50c694222ceb4b9afc6610284cd592d5480cbcc3fe1b8d77d9d22d8a2e395e4"} Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.182867 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d50c694222ceb4b9afc6610284cd592d5480cbcc3fe1b8d77d9d22d8a2e395e4" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.185532 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-dd7hj" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.193904 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365187 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365270 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365384 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365502 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365815 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts" (OuterVolumeSpecName: "scripts") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.366071 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.366270 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data" (OuterVolumeSpecName: "config-data") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.367015 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs" (OuterVolumeSpecName: "logs") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.371309 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q" (OuterVolumeSpecName: "kube-api-access-kd28q") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "kube-api-access-kd28q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.384835 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468029 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468066 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468658 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468687 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:29 crc kubenswrapper[4593]: I0129 11:16:29.191516 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:16:29 crc kubenswrapper[4593]: I0129 11:16:29.231593 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:16:29 crc kubenswrapper[4593]: I0129 11:16:29.240810 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.086941 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95d7c5f-c170-4c14-966f-acdbfa95582d" path="/var/lib/kubelet/pods/c95d7c5f-c170-4c14-966f-acdbfa95582d/volumes" Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.108416 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:31 crc kubenswrapper[4593]: > Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.510592 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.510738 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:16:32 crc kubenswrapper[4593]: I0129 11:16:32.297359 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:32 crc kubenswrapper[4593]: I0129 11:16:32.355052 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:32 crc kubenswrapper[4593]: I0129 11:16:32.547468 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.946438 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.946769 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.946826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.947605 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 
11:16:33.947696 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d" gracePeriod=600 Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246168 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d" exitCode=0 Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246362 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" containerID="cri-o://40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b" gracePeriod=2 Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246735 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"} Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246777 4593 scope.go:117] "RemoveContainer" containerID="61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2" Jan 29 11:16:35 crc kubenswrapper[4593]: I0129 11:16:35.258248 4593 generic.go:334] "Generic (PLEG): container finished" podID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerID="40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b" exitCode=0 Jan 29 11:16:35 crc kubenswrapper[4593]: I0129 11:16:35.258331 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b"} Jan 29 11:16:36 crc kubenswrapper[4593]: I0129 11:16:36.617421 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.267813 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.268330 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h545h68fh6dh5b8h54bh67bh8h5b8hch5b7h6fh8h556h648h557h5f5h85h54bh4h674h589h5bdh598h94h558h654h5d4h67dh58ch5dh56bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8jvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5bdffb4784-5zp8q_openstack(be4a01cd-2eb7-48e8-8a7e-eb02f8851188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.270350 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.280315 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.280478 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbhd5hc4h59dh76h94hd4h698h687h668hch66dh56ch7h5bch5d5hdbh655h5d4h584h54fh7fh6dhdch58bh5b4h645h8ch587h644h647h597q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rm7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-54cbb9595c-pxkrk_openstack(4eb162fe-a643-47e7-b254-d6f394cc10a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.283103 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-54cbb9595c-pxkrk" podUID="4eb162fe-a643-47e7-b254-d6f394cc10a3" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.353699 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.442925 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443033 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443159 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443185 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443303 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443396 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.469606 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.469660 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts" (OuterVolumeSpecName: "scripts") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.469742 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.501285 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw" (OuterVolumeSpecName: "kube-api-access-tjqtw") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "kube-api-access-tjqtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.504925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data" (OuterVolumeSpecName: "config-data") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.511947 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546678 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546706 4593 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546715 4593 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546723 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546731 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546739 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.096732 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.102999 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.119414 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerDied","Data":"f93093eedad3e691c33b05950a5766a9bfd338de35a4024df89e92e1e6b5e974"} Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.119448 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93093eedad3e691c33b05950a5766a9bfd338de35a4024df89e92e1e6b5e974" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.454741 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.463219 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.556467 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.556991 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerName="keystone-bootstrap" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.557013 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerName="keystone-bootstrap" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.557239 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerName="keystone-bootstrap" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.558182 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.560560 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.560872 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.561610 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.561777 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.561902 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.590809 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.671831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672009 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672285 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672419 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672535 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773786 4593 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773872 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773986 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.774048 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.774110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.780395 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.781144 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.781291 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.788177 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") 
" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.788722 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.791781 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.837827 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.838023 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb5h54bhcdh588h5c8hb4h675h674hb6h566h664hd5h688hbdh68bh5bchf7hf4h578h544h5bch658h698h89h5cdh566h64bh596h555h644h5d5h5f8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bjjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-fbf566cdb-kbm9z_openstack(b9761a4f-8669-4e74-9f8e-ed8b9778af11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.840235 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", 
failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.894494 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.894668 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h678s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2wbrt_openstack(c39458c0-d624-4ed0-8444-417e479028d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.896017 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2wbrt" podUID="c39458c0-d624-4ed0-8444-417e479028d2" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.899589 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.920520 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.078930 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079193 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079270 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079340 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079414 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.081064 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs" (OuterVolumeSpecName: "logs") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.081914 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data" (OuterVolumeSpecName: "config-data") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.083532 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts" (OuterVolumeSpecName: "scripts") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.086550 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.097254 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr" (OuterVolumeSpecName: "kube-api-access-2llwr") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "kube-api-access-2llwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.131410 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc699bb9-mhr4g" event={"ID":"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4","Type":"ContainerDied","Data":"89376b5d197b69125b3a6abd1f18c2e1c2f09575f848fb7b067180fd45d54911"} Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.131521 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:16:40 crc kubenswrapper[4593]: E0129 11:16:40.140048 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2wbrt" podUID="c39458c0-d624-4ed0-8444-417e479028d2" Jan 29 11:16:40 crc kubenswrapper[4593]: E0129 11:16:40.140679 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222257 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222295 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222310 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222328 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222340 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.277696 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.285020 4593 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.092510 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" path="/var/lib/kubelet/pods/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4/volumes" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.100358 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" path="/var/lib/kubelet/pods/b3035bcf-246f-4bad-9c08-bd2188aa4098/volumes" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.109245 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:41 crc kubenswrapper[4593]: > Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.396814 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.403058 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.413408 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.414165 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.414927 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415013 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415074 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415138 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415218 4593 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415248 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415327 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.414104 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data" (OuterVolumeSpecName: "config-data") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.420403 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts" (OuterVolumeSpecName: "scripts") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.432001 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs" (OuterVolumeSpecName: "logs") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.451525 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.451677 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb8cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qqbm9_openstack(9a0467fe-4786-4231-bf52-8a305e9a4f89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.453352 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r" (OuterVolumeSpecName: "kube-api-access-8rm7r") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "kube-api-access-8rm7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.453945 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qqbm9" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.458558 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.462650 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.470809 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649" (OuterVolumeSpecName: "kube-api-access-h2649") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "kube-api-access-h2649". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.519187 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.522262 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config" (OuterVolumeSpecName: "config") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.525089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.525385 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.526330 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527083 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527108 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527120 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527133 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527143 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527165 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527177 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527188 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.548625 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities" (OuterVolumeSpecName: "utilities") pod "fef7c251-cfb4-4d34-995d-1994b7a8dbe3" (UID: 
"fef7c251-cfb4-4d34-995d-1994b7a8dbe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.554784 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f" (OuterVolumeSpecName: "kube-api-access-mnh7f") pod "fef7c251-cfb4-4d34-995d-1994b7a8dbe3" (UID: "fef7c251-cfb4-4d34-995d-1994b7a8dbe3"). InnerVolumeSpecName "kube-api-access-mnh7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.567741 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.569682 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.591012 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fef7c251-cfb4-4d34-995d-1994b7a8dbe3" (UID: "fef7c251-cfb4-4d34-995d-1994b7a8dbe3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629478 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629519 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629536 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629550 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629563 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.776438 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.776662 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h65fh5ffh5fbhfbh578h5fch58dh595h545hf6h665h557h64ch546h586h56ch75h8h599h558hc8hb5h5bbh65h8bh554h665h54h5b4h5c8hb9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f789a029-2899-4cb2-8b99-55b77db98b9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.155199 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerDied","Data":"2b0a11af2b235a2fb8adafd584c05dc53c5aec7086cbb35dcb104dd6b636f9bc"} Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.155486 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.155583 4593 scope.go:117] "RemoveContainer" containerID="3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.159777 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"78dbfe42e92421682419cdaea165d73392eb4f589d0fece85d9b2c89989dd32e"} Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.160853 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.161756 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.163856 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54cbb9595c-pxkrk" event={"ID":"4eb162fe-a643-47e7-b254-d6f394cc10a3","Type":"ContainerDied","Data":"133a890db821bdd702c17ce64066fb1c09e02bfe05952cb746dcbd9bf0d47a30"} Jan 29 11:16:42 crc kubenswrapper[4593]: E0129 11:16:42.165335 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qqbm9" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.273224 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.283401 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.297700 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.312495 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.341800 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.354240 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.362082 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.089602 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" path="/var/lib/kubelet/pods/1dc04f8a-c522-49b8-bdf6-59b7edad2d63/volumes" Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.094613 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb162fe-a643-47e7-b254-d6f394cc10a3" path="/var/lib/kubelet/pods/4eb162fe-a643-47e7-b254-d6f394cc10a3/volumes" Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.095152 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" path="/var/lib/kubelet/pods/fef7c251-cfb4-4d34-995d-1994b7a8dbe3/volumes" Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.715522 4593 scope.go:117] "RemoveContainer" containerID="3a1884f5780e941a8c795fbe0356484ff14b38b8354e043148a53f7b7fef73d5" Jan 29 11:16:44 crc kubenswrapper[4593]: I0129 11:16:44.181036 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerStarted","Data":"c46c10c9ff8d263c23d78a62789956ef4717d8d84b1c8aaff15cc76667c7e691"} Jan 29 11:16:48 crc kubenswrapper[4593]: I0129 11:16:48.769996 4593 scope.go:117] "RemoveContainer" containerID="40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b" Jan 29 11:16:48 crc kubenswrapper[4593]: I0129 11:16:48.864618 4593 scope.go:117] "RemoveContainer" containerID="26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7" Jan 29 11:16:48 crc kubenswrapper[4593]: 
I0129 11:16:48.948928 4593 scope.go:117] "RemoveContainer" containerID="c6e6f1ac55c53b64f5a8d09aab84fcbf98dc6146a8ab819b2f4a3c9dfdc9a62a" Jan 29 11:16:49 crc kubenswrapper[4593]: I0129 11:16:49.227378 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerStarted","Data":"0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3"} Jan 29 11:16:49 crc kubenswrapper[4593]: I0129 11:16:49.229974 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002"} Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.115280 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:51 crc kubenswrapper[4593]: > Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.270021 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerStarted","Data":"dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc"} Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.295183 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8z7b6" podStartSLOduration=12.295161663 podStartE2EDuration="12.295161663s" podCreationTimestamp="2026-01-29 11:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:16:51.282528476 +0000 UTC m=+1077.155562677" watchObservedRunningTime="2026-01-29 11:16:51.295161663 +0000 UTC m=+1077.168195864" Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.329559 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dd7hj" podStartSLOduration=12.062328605 podStartE2EDuration="56.32954128s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="2026-01-29 11:15:57.667780597 +0000 UTC m=+1023.540814788" lastFinishedPulling="2026-01-29 11:16:41.934993272 +0000 UTC m=+1067.808027463" observedRunningTime="2026-01-29 11:16:51.326171231 +0000 UTC m=+1077.199205462" watchObservedRunningTime="2026-01-29 11:16:51.32954128 +0000 UTC m=+1077.202575471" Jan 29 11:17:01 crc kubenswrapper[4593]: I0129 11:17:01.603688 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:17:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:17:01 crc kubenswrapper[4593]: > Jan 29 11:17:09 crc kubenswrapper[4593]: E0129 11:17:09.851376 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:17:09 crc kubenswrapper[4593]: E0129 11:17:09.852174 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb5h54bhcdh588h5c8hb4h675h674hb6h566h664hd5h688hbdh68bh5bchf7hf4h578h544h5bch658h698h89h5cdh566h64bh596h555h644h5d5h5f8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bjjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-fbf566cdb-kbm9z_openstack(b9761a4f-8669-4e74-9f8e-ed8b9778af11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:17:10 crc kubenswrapper[4593]: E0129 11:17:10.028893 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Jan 29 11:17:10 crc kubenswrapper[4593]: E0129 11:17:10.029051 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f789a029-2899-4cb2-8b99-55b77db98b9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:17:10 crc kubenswrapper[4593]: E0129 11:17:10.216602 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.443394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996"} Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.461588 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"4ea44b885ada361be4b5f0a32e896db941b82f262b405096f4aa89cb728d6d62"} Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.469303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerStarted","Data":"99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad"} Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.509439 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2wbrt" podStartSLOduration=3.203243207 podStartE2EDuration="1m15.50940547s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="2026-01-29 11:15:57.827011937 +0000 UTC m=+1023.700046128" lastFinishedPulling="2026-01-29 11:17:10.1331742 +0000 UTC m=+1096.006208391" observedRunningTime="2026-01-29 11:17:10.499407183 +0000 UTC m=+1096.372441374" watchObservedRunningTime="2026-01-29 11:17:10.50940547 +0000 UTC m=+1096.382439661" Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.113541 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:17:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:17:11 crc kubenswrapper[4593]: > Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.485243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8"} Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.489303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1"} Jan 29 
11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.521965 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fbf566cdb-kbm9z" podStartSLOduration=-9223371969.332834 podStartE2EDuration="1m7.52194184s" podCreationTimestamp="2026-01-29 11:16:04 +0000 UTC" firstStartedPulling="2026-01-29 11:16:05.282293227 +0000 UTC m=+1031.155327418" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:11.517184983 +0000 UTC m=+1097.390219184" watchObservedRunningTime="2026-01-29 11:17:11.52194184 +0000 UTC m=+1097.394976031" Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.552436 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bdffb4784-5zp8q" podStartSLOduration=3.303998747 podStartE2EDuration="1m7.552411513s" podCreationTimestamp="2026-01-29 11:16:04 +0000 UTC" firstStartedPulling="2026-01-29 11:16:05.631917367 +0000 UTC m=+1031.504951558" lastFinishedPulling="2026-01-29 11:17:09.880330133 +0000 UTC m=+1095.753364324" observedRunningTime="2026-01-29 11:17:11.544553643 +0000 UTC m=+1097.417587834" watchObservedRunningTime="2026-01-29 11:17:11.552411513 +0000 UTC m=+1097.425445704" Jan 29 11:17:12 crc kubenswrapper[4593]: I0129 11:17:12.525395 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerStarted","Data":"06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e"} Jan 29 11:17:12 crc kubenswrapper[4593]: I0129 11:17:12.559682 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qqbm9" podStartSLOduration=4.284647394 podStartE2EDuration="1m17.559661512s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="2026-01-29 11:15:56.857936676 +0000 UTC m=+1022.730970867" lastFinishedPulling="2026-01-29 11:17:10.132950784 +0000 UTC m=+1096.005984985" observedRunningTime="2026-01-29 11:17:12.552994884 +0000 UTC m=+1098.426029075" watchObservedRunningTime="2026-01-29 11:17:12.559661512 +0000 UTC m=+1098.432695703" Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.650259 4593 generic.go:334] "Generic (PLEG): container finished" podID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerID="0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3" exitCode=0 Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.650665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerDied","Data":"0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3"} Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.657908 4593 generic.go:334] "Generic (PLEG): container finished" podID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerID="dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc" exitCode=0 Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.657979 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerDied","Data":"dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc"} Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.668115 4593 generic.go:334] "Generic (PLEG): container finished" podID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerID="6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99" exitCode=0 Jan 29 11:17:14 crc 
kubenswrapper[4593]: I0129 11:17:14.669161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerDied","Data":"6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99"} Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.909961 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.910317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.060406 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.060772 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.069761 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190252 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190458 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190501 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190538 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190760 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.200840 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb" (OuterVolumeSpecName: "kube-api-access-fl8bb") 
pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "kube-api-access-fl8bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.202562 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts" (OuterVolumeSpecName: "scripts") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.210133 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.214145 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.235740 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.241386 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data" (OuterVolumeSpecName: "config-data") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.249929 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dd7hj" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.292979 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293080 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293184 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293276 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293896 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293924 4593 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293935 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293946 4593 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293973 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293985 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293982 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs" (OuterVolumeSpecName: "logs") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.305211 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts" (OuterVolumeSpecName: "kube-api-access-6q8ts") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "kube-api-access-6q8ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.308680 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts" (OuterVolumeSpecName: "scripts") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.320203 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data" (OuterVolumeSpecName: "config-data") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.333681 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434187 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434231 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434244 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434262 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434276 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.685050 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dd7hj" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.685028 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerDied","Data":"b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1"} Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.685903 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.693031 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.693110 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerDied","Data":"c46c10c9ff8d263c23d78a62789956ef4717d8d84b1c8aaff15cc76667c7e691"} Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.693150 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c46c10c9ff8d263c23d78a62789956ef4717d8d84b1c8aaff15cc76667c7e691" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.926640 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927078 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="init" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927102 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="init" Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927121 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-utilities" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927129 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-utilities" Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927155 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927163 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927176 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerName="placement-db-sync" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927184 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerName="placement-db-sync" Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927193 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerName="keystone-bootstrap" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927201 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerName="keystone-bootstrap" Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927216 4593 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927223 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927232 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-content" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927241 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-content" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927452 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927469 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927485 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerName="keystone-bootstrap" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927506 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerName="placement-db-sync" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.928569 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936451 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936745 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2pqk2" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936861 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936973 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.937069 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.949051 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f96568f6f-lfzv9"] Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.950698 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.962069 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.962767 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.964342 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.966159 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.966419 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.966576 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.001096 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.002896 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f96568f6f-lfzv9"] Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.050961 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-public-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051049 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wk99\" (UniqueName: \"kubernetes.io/projected/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-kube-api-access-4wk99\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-scripts\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051247 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051264 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051421 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-fernet-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051499 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-config-data\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-combined-ca-bundle\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051622 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051839 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-internal-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051919 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051977 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-credential-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.052011 
4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.153532 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-scripts\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.185948 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.185991 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186077 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-fernet-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186132 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-config-data\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186169 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-combined-ca-bundle\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186248 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186336 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-internal-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186366 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186417 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-credential-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186446 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-public-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186734 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wk99\" (UniqueName: \"kubernetes.io/projected/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-kube-api-access-4wk99\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.191994 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.159275 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-scripts\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.194064 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.200573 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: 
\"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.200963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-public-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.205238 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.219207 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.219414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-fernet-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.220938 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.221491 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-combined-ca-bundle\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.221773 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-config-data\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.222071 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-credential-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.224178 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-internal-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.225778 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.232356 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.244419 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wk99\" (UniqueName: \"kubernetes.io/projected/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-kube-api-access-4wk99\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.245497 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.275716 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.463644 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-869645f564-n6fhc"] Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.465431 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.518781 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-869645f564-n6fhc"] Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.588070 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609778 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-logs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609812 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-combined-ca-bundle\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-config-data\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609878 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-internal-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609901 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-scripts\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609920 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-public-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlq5\" (UniqueName: \"kubernetes.io/projected/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-kube-api-access-djlq5\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712413 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712452 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: 
\"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712524 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712959 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713228 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-scripts\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713260 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-public-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713280 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djlq5\" (UniqueName: \"kubernetes.io/projected/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-kube-api-access-djlq5\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713390 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-logs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713411 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-combined-ca-bundle\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713463 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-config-data\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713485 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-internal-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.715097 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-logs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.724775 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-combined-ca-bundle\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.730468 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf" (OuterVolumeSpecName: "kube-api-access-z4lrf") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "kube-api-access-z4lrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.730911 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerDied","Data":"75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59"} Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.730959 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.731023 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.731703 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-config-data\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.735596 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djlq5\" (UniqueName: \"kubernetes.io/projected/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-kube-api-access-djlq5\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.741188 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.742692 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-internal-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.745391 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-scripts\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.748333 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-public-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.780169 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.788734 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.830367 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.830719 4593 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.830741 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.836817 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data" (OuterVolumeSpecName: "config-data") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.932748 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.136057 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f96568f6f-lfzv9"] Jan 29 11:17:17 crc kubenswrapper[4593]: W0129 11:17:17.139451 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e767a2_2e4c_4a41_995f_1f0ca9248d1a.slice/crio-be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90 WatchSource:0}: Error finding container be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90: Status 404 returned error can't find the container with id be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90 Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.206126 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.409411 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:17 crc kubenswrapper[4593]: E0129 11:17:17.415234 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerName="glance-db-sync" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.415268 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerName="glance-db-sync" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.415517 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerName="glance-db-sync" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.416354 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.451168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550679 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550726 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550772 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550879 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.565479 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-869645f564-n6fhc"] Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.674610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.674694 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.677887 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.678342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.679409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.679923 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.680525 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.682230 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.683459 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.679211 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.707450 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: 
I0129 11:17:17.708660 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.744326 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.749127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f96568f6f-lfzv9" event={"ID":"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a","Type":"ContainerStarted","Data":"be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90"} Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.767875 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerStarted","Data":"32fdfc7881c963abaad68073c4d49c25e3c8cc05f9fcc814488ad8238d96326b"} Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.775528 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869645f564-n6fhc" event={"ID":"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747","Type":"ContainerStarted","Data":"1c3e9e98f800409a9823c6a497606c5854e95eee895be2ee59cd726addc960dc"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.231970 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.233965 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.242023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.242761 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.255790 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lfv28" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.287044 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415210 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415335 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415458 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvcks\" (UniqueName: 
\"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415504 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415566 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415593 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415660 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528100 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528201 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528227 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528279 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528318 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528882 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.529187 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.529200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.534702 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.535171 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.534625 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.580191 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod 
\"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.583828 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.590953 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.613099 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.636245 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.641885 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.649543 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.657363 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.797061 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869645f564-n6fhc" event={"ID":"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747","Type":"ContainerStarted","Data":"594000ead793855509f5118738c4f17be545b8f782da5155ae07305547f20250"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.803892 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f96568f6f-lfzv9" event={"ID":"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a","Type":"ContainerStarted","Data":"74f0241ce60422f1a94e55be9dd85f880e1040fce58ac0dc98969f9d916be9bb"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.805063 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.810169 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerStarted","Data":"fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.810210 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerStarted","Data":"b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.811093 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.811125 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.829607 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f96568f6f-lfzv9" podStartSLOduration=3.82958187 podStartE2EDuration="3.82958187s" 
podCreationTimestamp="2026-01-29 11:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:18.821375211 +0000 UTC m=+1104.694409402" watchObservedRunningTime="2026-01-29 11:17:18.82958187 +0000 UTC m=+1104.702616071" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.832799 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.832880 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.832930 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833025 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833116 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833158 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833181 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.857699 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-669db997bd-hhbcc" podStartSLOduration=3.857679569 podStartE2EDuration="3.857679569s" podCreationTimestamp="2026-01-29 11:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 11:17:18.843592044 +0000 UTC m=+1104.716626255" watchObservedRunningTime="2026-01-29 11:17:18.857679569 +0000 UTC m=+1104.730713760" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937310 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937396 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937559 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.938701 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.939270 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.942447 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.942765 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.953263 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.955035 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.966036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.984562 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:19 crc kubenswrapper[4593]: I0129 11:17:19.270802 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:20 crc kubenswrapper[4593]: I0129 11:17:20.381061 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:20 crc kubenswrapper[4593]: I0129 11:17:20.503012 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:21 crc kubenswrapper[4593]: I0129 11:17:21.121934 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:17:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:17:21 crc kubenswrapper[4593]: > Jan 29 11:17:22 crc kubenswrapper[4593]: I0129 11:17:22.852732 4593 generic.go:334] "Generic (PLEG): container finished" podID="c39458c0-d624-4ed0-8444-417e479028d2" containerID="99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad" exitCode=0 Jan 29 11:17:22 crc kubenswrapper[4593]: I0129 11:17:22.852743 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerDied","Data":"99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad"} Jan 29 11:17:24 crc kubenswrapper[4593]: W0129 11:17:24.447758 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7926455_1b18_4907_831f_c8949c999c3e.slice/crio-9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293 WatchSource:0}: Error finding container 9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293: Status 404 returned error can't find the container with id 9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293 Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.589454 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.605998 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"c39458c0-d624-4ed0-8444-417e479028d2\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.606114 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"c39458c0-d624-4ed0-8444-417e479028d2\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.606306 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"c39458c0-d624-4ed0-8444-417e479028d2\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.615357 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c39458c0-d624-4ed0-8444-417e479028d2" (UID: "c39458c0-d624-4ed0-8444-417e479028d2"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.634324 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s" (OuterVolumeSpecName: "kube-api-access-h678s") pod "c39458c0-d624-4ed0-8444-417e479028d2" (UID: "c39458c0-d624-4ed0-8444-417e479028d2"). InnerVolumeSpecName "kube-api-access-h678s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.674720 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c39458c0-d624-4ed0-8444-417e479028d2" (UID: "c39458c0-d624-4ed0-8444-417e479028d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.708093 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.708125 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.708134 4593 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.872951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" event={"ID":"c7926455-1b18-4907-831f-c8949c999c3e","Type":"ContainerStarted","Data":"9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293"} Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.874198 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerDied","Data":"48df691aa2eae747d4bfbb1c9e2a92cb2fce2abef2c0b184a7c467030b299d90"} Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.874233 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48df691aa2eae747d4bfbb1c9e2a92cb2fce2abef2c0b184a7c467030b299d90" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.874292 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.912210 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.052341 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.313556 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cf8bfd486-7dlhx"] Jan 29 11:17:25 crc kubenswrapper[4593]: E0129 11:17:25.325076 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39458c0-d624-4ed0-8444-417e479028d2" containerName="barbican-db-sync" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.325110 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39458c0-d624-4ed0-8444-417e479028d2" containerName="barbican-db-sync" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.325287 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39458c0-d624-4ed0-8444-417e479028d2" containerName="barbican-db-sync" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.326181 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.338112 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.346211 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qf2gb" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.346577 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.346722 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5947965cdc-wl48v"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.348132 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.353096 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.378689 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cf8bfd486-7dlhx"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.409700 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5947965cdc-wl48v"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422183 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-combined-ca-bundle\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnvt\" (UniqueName: \"kubernetes.io/projected/5f3c398f-928a-4f7e-9e76-6978b8a3673e-kube-api-access-bwnvt\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422710 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423122 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3c398f-928a-4f7e-9e76-6978b8a3673e-logs\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423254 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8v5\" (UniqueName: \"kubernetes.io/projected/564d3b50-7cec-4913-bac8-64af532aa32f-kube-api-access-wg8v5\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423373 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564d3b50-7cec-4913-bac8-64af532aa32f-logs\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 
11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423501 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data-custom\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423625 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.424050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data-custom\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.520323 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527193 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527268 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3c398f-928a-4f7e-9e76-6978b8a3673e-logs\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527304 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8v5\" (UniqueName: \"kubernetes.io/projected/564d3b50-7cec-4913-bac8-64af532aa32f-kube-api-access-wg8v5\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527333 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564d3b50-7cec-4913-bac8-64af532aa32f-logs\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527370 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data-custom\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527394 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527454 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data-custom\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527513 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-combined-ca-bundle\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527538 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnvt\" (UniqueName: \"kubernetes.io/projected/5f3c398f-928a-4f7e-9e76-6978b8a3673e-kube-api-access-bwnvt\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527578 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.531666 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3c398f-928a-4f7e-9e76-6978b8a3673e-logs\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.535470 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data-custom\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.535523 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data-custom\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.540045 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " 
pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.542483 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.542857 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564d3b50-7cec-4913-bac8-64af532aa32f-logs\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.544113 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.562978 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-combined-ca-bundle\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.573532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8v5\" (UniqueName: \"kubernetes.io/projected/564d3b50-7cec-4913-bac8-64af532aa32f-kube-api-access-wg8v5\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.588338 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnvt\" (UniqueName: \"kubernetes.io/projected/5f3c398f-928a-4f7e-9e76-6978b8a3673e-kube-api-access-bwnvt\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.613712 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.615813 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630611 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630685 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630739 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630789 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630845 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630880 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.684804 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.695015 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.718081 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735751 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735831 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735900 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735939 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735999 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.736031 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.737111 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.737711 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.737825 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc 
kubenswrapper[4593]: I0129 11:17:25.737886 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.738246 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.795499 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.803683 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.805261 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.810717 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844186 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844528 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844617 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844847 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.849907 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946582 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946698 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946735 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946830 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946885 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.950538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.959068 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.971473 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.976819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.978416 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.981736 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:26 crc kubenswrapper[4593]: I0129 11:17:26.151613 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:27 crc kubenswrapper[4593]: E0129 11:17:27.271876 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" Jan 29 11:17:27 crc kubenswrapper[4593]: W0129 11:17:27.306937 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229 WatchSource:0}: Error finding container e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229: Status 404 returned error can't find the container with id e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229 Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.311377 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.567685 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.640964 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.661435 4593 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cf8bfd486-7dlhx"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.881597 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.972905 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869645f564-n6fhc" event={"ID":"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747","Type":"ContainerStarted","Data":"7cb00c01315e420b93a8a3b56f18b13dfdf8bf1aee9c02e62e465749e77fa56e"} Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.974058 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.974355 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.022905 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-869645f564-n6fhc" podStartSLOduration=12.02287861 podStartE2EDuration="12.02287861s" podCreationTimestamp="2026-01-29 11:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:28.01015918 +0000 UTC m=+1113.883193361" watchObservedRunningTime="2026-01-29 11:17:28.02287861 +0000 UTC m=+1113.895912801" Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.048170 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerStarted","Data":"7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.048798 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.049230 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" containerID="cri-o://7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179" gracePeriod=30 Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.094356 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerStarted","Data":"e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.112982 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerStarted","Data":"7d27101e8eb2775200135497bf42bb1e384ed63a353e51a2c079db75d1e60d15"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.155153 4593 generic.go:334] "Generic (PLEG): container finished" podID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerID="06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e" exitCode=0 Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.155234 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerDied","Data":"06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 
11:17:28.175215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" event={"ID":"5f3c398f-928a-4f7e-9e76-6978b8a3673e","Type":"ContainerStarted","Data":"cc1522410d38eada260e7227deef9aa8a3ddb52ee0c14975ca76ecce47f73dd2"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.177475 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerStarted","Data":"ab6230e4600dcb9af699c78e8e565ba5926552d85dcff4c655fbdfc2c4ef02b3"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.180278 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5947965cdc-wl48v"] Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.193823 4593 generic.go:334] "Generic (PLEG): container finished" podID="c7926455-1b18-4907-831f-c8949c999c3e" containerID="c61dc38ebb9e5834aa0947deaf7f60860b3b4b6689bf4392d11591aefe6c59f7" exitCode=0 Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.194065 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" event={"ID":"c7926455-1b18-4907-831f-c8949c999c3e","Type":"ContainerDied","Data":"c61dc38ebb9e5834aa0947deaf7f60860b3b4b6689bf4392d11591aefe6c59f7"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.198942 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerStarted","Data":"05de26e206fafddf17c6b67f5b66ecbc3caad8b51d7a1c1c245985e3b6e06f37"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.249558 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5947965cdc-wl48v" event={"ID":"564d3b50-7cec-4913-bac8-64af532aa32f","Type":"ContainerStarted","Data":"49834deb18122c31f2c3a60696ea136d4cad992a96000561c40ef8b0aa709f3b"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.256686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerStarted","Data":"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.256742 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerStarted","Data":"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.257384 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.257626 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.285290 4593 generic.go:334] "Generic (PLEG): container finished" podID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerID="7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179" exitCode=0 Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.285401 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerDied","Data":"7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179"} Jan 29 11:17:29 crc kubenswrapper[4593]: 
I0129 11:17:29.311556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerStarted","Data":"7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.361951 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-766cf76c8b-cjg59" podStartSLOduration=4.361922213 podStartE2EDuration="4.361922213s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:29.333749051 +0000 UTC m=+1115.206783242" watchObservedRunningTime="2026-01-29 11:17:29.361922213 +0000 UTC m=+1115.234956514" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.403344 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453348 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453450 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453511 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453560 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453595 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453716 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.494359 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n" (OuterVolumeSpecName: "kube-api-access-xrv9n") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). 
InnerVolumeSpecName "kube-api-access-xrv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.528744 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.547515 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.551183 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.556078 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557438 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557464 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557473 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557483 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.615782 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.638071 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config" (OuterVolumeSpecName: "config") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659394 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659517 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659544 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659616 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659699 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659726 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.660585 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.660611 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.661120 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.716480 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.720340 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.723382 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7" (OuterVolumeSpecName: "kube-api-access-mfxh7") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "kube-api-access-mfxh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.730180 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts" (OuterVolumeSpecName: "scripts") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765890 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765927 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765937 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765946 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765954 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.772666 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data" (OuterVolumeSpecName: "config-data") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.784113 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.873907 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.874275 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.046552 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59844fc4b6-zctck"] Jan 29 11:17:30 crc kubenswrapper[4593]: E0129 11:17:30.047327 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047343 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" Jan 29 11:17:30 crc kubenswrapper[4593]: E0129 11:17:30.047375 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7926455-1b18-4907-831f-c8949c999c3e" containerName="init" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047383 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7926455-1b18-4907-831f-c8949c999c3e" containerName="init" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047607 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7926455-1b18-4907-831f-c8949c999c3e" containerName="init" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047669 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.051383 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.062144 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.062392 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.068490 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59844fc4b6-zctck"] Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.198748 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgc9p\" (UniqueName: \"kubernetes.io/projected/07d138d8-a5fa-4b77-80e5-924dba8de4c0-kube-api-access-qgc9p\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.198979 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-public-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202213 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d138d8-a5fa-4b77-80e5-924dba8de4c0-logs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202268 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202415 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data-custom\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202500 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-internal-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202540 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-combined-ca-bundle\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304685 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-public-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304764 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d138d8-a5fa-4b77-80e5-924dba8de4c0-logs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304787 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304841 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data-custom\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-internal-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304900 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-combined-ca-bundle\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgc9p\" (UniqueName: \"kubernetes.io/projected/07d138d8-a5fa-4b77-80e5-924dba8de4c0-kube-api-access-qgc9p\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.307143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d138d8-a5fa-4b77-80e5-924dba8de4c0-logs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.312448 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-public-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.313470 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-combined-ca-bundle\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.314986 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-internal-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.332456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data-custom\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.350894 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.351609 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgc9p\" (UniqueName: \"kubernetes.io/projected/07d138d8-a5fa-4b77-80e5-924dba8de4c0-kube-api-access-qgc9p\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.374788 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" event={"ID":"c7926455-1b18-4907-831f-c8949c999c3e","Type":"ContainerDied","Data":"9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.374871 4593 scope.go:117] "RemoveContainer" containerID="c61dc38ebb9e5834aa0947deaf7f60860b3b4b6689bf4392d11591aefe6c59f7" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.375083 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.415139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerStarted","Data":"0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.443990 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.443995 4593 generic.go:334] "Generic (PLEG): container finished" podID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerID="715e647703b26a590bd9c34541d425220134bcfb800847b738a35414acceb9c1" exitCode=0 Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.444091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerDied","Data":"715e647703b26a590bd9c34541d425220134bcfb800847b738a35414acceb9c1"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.445586 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.453089 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.458553 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerDied","Data":"81e674e8a5ccd570da2b45a02c26820c6aece1f8b0def79a73d4b051b04177a1"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.458781 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.516862 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517178 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517247 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517272 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517293 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517408 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 
29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.539407 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts" (OuterVolumeSpecName: "scripts") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.540519 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.552779 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj" (OuterVolumeSpecName: "kube-api-access-hb8cj") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "kube-api-access-hb8cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.553756 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.580318 4593 scope.go:117] "RemoveContainer" containerID="7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622104 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622150 4593 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622164 4593 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622175 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.627011 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.647522 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.660056 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.725902 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.787941 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data" (OuterVolumeSpecName: "config-data") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.832007 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.916268 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.916334 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.927987 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:17:30 crc kubenswrapper[4593]: E0129 11:17:30.928392 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerName="cinder-db-sync" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.928413 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerName="cinder-db-sync" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.928582 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerName="cinder-db-sync" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.930481 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.937664 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.937677 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.951401 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038363 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038654 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038693 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038735 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038786 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038825 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038872 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.089088 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7926455-1b18-4907-831f-c8949c999c3e" path="/var/lib/kubelet/pods/c7926455-1b18-4907-831f-c8949c999c3e/volumes" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.089912 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" path="/var/lib/kubelet/pods/f789a029-2899-4cb2-8b99-55b77db98b9f/volumes" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140805 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140907 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140969 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.141008 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.141047 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.141071 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.179422 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.181897 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.183740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"ceilometer-0\" (UID: 
\"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.185328 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.185385 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.186042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.209583 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.277318 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.331309 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:17:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:17:31 crc kubenswrapper[4593]: > Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.331389 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.332103 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9"} pod="openshift-marketplace/redhat-operators-k4l8n" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.332132 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" containerID="cri-o://392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9" gracePeriod=30 Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.482657 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerDied","Data":"4a77796204d00631fc171e9b5f3f1adaf76dc3ea5c4251742c0c78ae086cb84b"} Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.482704 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a77796204d00631fc171e9b5f3f1adaf76dc3ea5c4251742c0c78ae086cb84b" Jan 29 
11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.482780 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.703285 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59844fc4b6-zctck"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.156711 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.174293 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.191206 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.201216 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhpvr" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.237018 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.237283 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.237461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.244157 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.298898 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.298959 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299044 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299072 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299163 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " 
pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299199 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.326406 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.372644 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.374138 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405055 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405104 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405179 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405231 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405251 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.408500 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: 
I0129 11:17:32.428701 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.433794 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.436522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.444161 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.445031 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.502166 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59844fc4b6-zctck" event={"ID":"07d138d8-a5fa-4b77-80e5-924dba8de4c0","Type":"ContainerStarted","Data":"cc2bf57001fb03a85840206e84847299fc4e42a35c3541ce09565299fe34a0a7"} Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.503486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"0eb50a3ac1f633cc99edb2df912ed9ee0643f4c8b02ce477d7d327cbda5af774"} Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.506963 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507044 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " 
pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507123 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507140 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507157 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507493 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.564152 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609680 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609711 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609758 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609781 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " 
pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609805 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.610831 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.611949 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.612579 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.613241 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.614602 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.644364 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.697360 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.925897 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.927472 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.948921 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.002805 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.023914 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024025 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024132 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024231 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024272 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024423 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.130179 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.130883 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131401 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131523 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131783 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.133976 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.130993 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.134527 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.142437 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.144369 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.145118 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.148253 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.154204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.309031 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.501513 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:17:33 crc kubenswrapper[4593]: W0129 11:17:33.509429 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10756552_28da_4e84_9c43_fb2be288e81f.slice/crio-966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0 WatchSource:0}: Error finding container 966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0: Status 404 returned error can't find the container with id 966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0 Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.519104 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59844fc4b6-zctck" event={"ID":"07d138d8-a5fa-4b77-80e5-924dba8de4c0","Type":"ContainerStarted","Data":"1dcb0d3ad44597fda668b536d2258c06dfb2f2f9f795f671928b7f0edbcbbc80"} Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.535603 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerStarted","Data":"98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d"} Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.535870 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log" containerID="cri-o://0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb" gracePeriod=30 Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.537235 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd" containerID="cri-o://98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d" gracePeriod=30 Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.591765 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.591744839 podStartE2EDuration="16.591744839s" podCreationTimestamp="2026-01-29 11:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
11:17:33.57305608 +0000 UTC m=+1119.446090271" watchObservedRunningTime="2026-01-29 11:17:33.591744839 +0000 UTC m=+1119.464779030" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.715194 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.986144 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.596600 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerStarted","Data":"5819a6ffae38a266d2b0e8c7f0f4a9a9ec8806aff42d69e8d72319628c862e12"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.617980 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerStarted","Data":"6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.618040 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log" containerID="cri-o://7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90" gracePeriod=30 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.618135 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd" containerID="cri-o://6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856" gracePeriod=30 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.641951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59844fc4b6-zctck" event={"ID":"07d138d8-a5fa-4b77-80e5-924dba8de4c0","Type":"ContainerStarted","Data":"d88270e238fb0280c9be483c689ea2ab0ed9693bd426148cd79f03f059fc5e20"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.642297 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.642464 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.672915 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=17.672893281 podStartE2EDuration="17.672893281s" podCreationTimestamp="2026-01-29 11:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:34.656181735 +0000 UTC m=+1120.529215926" watchObservedRunningTime="2026-01-29 11:17:34.672893281 +0000 UTC m=+1120.545927472" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705199 4593 generic.go:334] "Generic (PLEG): container finished" podID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerID="98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d" exitCode=143 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705468 4593 generic.go:334] "Generic (PLEG): container finished" podID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerID="0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb" exitCode=143 Jan 29 
11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705279 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerDied","Data":"98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705608 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerDied","Data":"0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.738276 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59844fc4b6-zctck" podStartSLOduration=4.7382517140000004 podStartE2EDuration="4.738251714s" podCreationTimestamp="2026-01-29 11:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:34.692654438 +0000 UTC m=+1120.565688629" watchObservedRunningTime="2026-01-29 11:17:34.738251714 +0000 UTC m=+1120.611285905" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.769415 4593 generic.go:334] "Generic (PLEG): container finished" podID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" exitCode=0 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.769503 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerDied","Data":"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.769529 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerStarted","Data":"564ff28580e51f15a586a4b36ebebac1a1de37d8a71b76aea863a2b018150e6b"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.847826 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerStarted","Data":"acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.847840 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns" containerID="cri-o://acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432" gracePeriod=10 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.848166 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.859344 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerStarted","Data":"966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0"} Jan 29 11:17:34 crc kubenswrapper[4593]: E0129 11:17:34.902975 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcad93c02_cde3_4a50_9f89_1800d0436d2d.slice/crio-conmon-b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcad93c02_cde3_4a50_9f89_1800d0436d2d.slice/crio-b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-conmon-6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-conmon-7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.914671 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.049408 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.122698 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" podStartSLOduration=10.122679453 podStartE2EDuration="10.122679453s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:34.895923782 +0000 UTC m=+1120.768957973" watchObservedRunningTime="2026-01-29 11:17:35.122679453 +0000 UTC m=+1120.995713644" Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.179776 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870132 4593 generic.go:334] "Generic (PLEG): container finished" podID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerID="6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856" exitCode=143 Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870445 4593 generic.go:334] "Generic (PLEG): container finished" podID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerID="7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90" exitCode=143 Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870346 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerDied","Data":"6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856"} 
Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerDied","Data":"7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90"} Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.874542 4593 generic.go:334] "Generic (PLEG): container finished" podID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerID="acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432" exitCode=0 Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.874625 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerDied","Data":"acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432"} Jan 29 11:17:36 crc kubenswrapper[4593]: I0129 11:17:36.888889 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerStarted","Data":"6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9"} Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.627999 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789194 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789352 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789430 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789459 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789533 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789716 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.816129 
4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2" (OuterVolumeSpecName: "kube-api-access-5chz2") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "kube-api-access-5chz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.883369 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.888725 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.891887 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.892341 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.892384 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.892396 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.940109 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerDied","Data":"7d27101e8eb2775200135497bf42bb1e384ed63a353e51a2c079db75d1e60d15"} Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.940156 4593 scope.go:117] "RemoveContainer" containerID="98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.940279 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.962725 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.963855 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.987119 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config" (OuterVolumeSpecName: "config") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997309 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997386 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997463 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997490 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997591 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997616 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997658 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.998134 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") 
on node \"crc\" DevicePath \"\"" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.998147 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.998156 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999460 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs" (OuterVolumeSpecName: "logs") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999781 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerDied","Data":"05de26e206fafddf17c6b67f5b66ecbc3caad8b51d7a1c1c245985e3b6e06f37"} Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999867 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999999 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.011997 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks" (OuterVolumeSpecName: "kube-api-access-nvcks") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "kube-api-access-nvcks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.014543 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.030825 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts" (OuterVolumeSpecName: "scripts") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.059199 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.066545 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103316 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103364 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103375 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103407 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103419 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.123631 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.182074 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.206039 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.253737 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.259743 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data" (OuterVolumeSpecName: "config-data") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.308960 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.309021 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.455285 4593 scope.go:117] "RemoveContainer" containerID="0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.901384 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.909221 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.924398 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.936845 4593 scope.go:117] "RemoveContainer" containerID="acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982272 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982671 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982685 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log" Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982701 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982707 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log" Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982720 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982725 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd" Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982737 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982743 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns" Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982750 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="init" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982755 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="init" 
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982762 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982767 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982937 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982962 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982984 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.983000 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.983009 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.983935 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.988092 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.988425 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.027335 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.062888 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.062988 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063041 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063079 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063096 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063147 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063960 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.064667 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs" (OuterVolumeSpecName: "logs") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.104938 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172057 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172121 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172154 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172235 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172254 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172323 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172502 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172871 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.173148 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 
29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.173328 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.215135 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts" (OuterVolumeSpecName: "scripts") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.215272 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.226916 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" path="/var/lib/kubelet/pods/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8/volumes" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.230631 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" path="/var/lib/kubelet/pods/50c0ed30-282a-446b-b0cc-f201e07cd2b5/volumes" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.238046 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerDied","Data":"e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229"} Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.255050 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8" (OuterVolumeSpecName: "kube-api-access-dpxq8") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "kube-api-access-dpxq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280514 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280584 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280604 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280621 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280674 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280688 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280722 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280763 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280849 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280872 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280881 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.285053 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.290456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.290852 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.381494 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.384063 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.384259 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.386243 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.386752 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.412449 4593 scope.go:117] "RemoveContainer" 
containerID="715e647703b26a590bd9c34541d425220134bcfb800847b738a35414acceb9c1" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.470851 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.632816 4593 scope.go:117] "RemoveContainer" containerID="6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.636623 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.665861 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.692360 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.931692 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.982856 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data" (OuterVolumeSpecName: "config-data") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.002009 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.002047 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.247205 4593 scope.go:117] "RemoveContainer" containerID="7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.247272 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.247315 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.278926 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" event={"ID":"5f3c398f-928a-4f7e-9e76-6978b8a3673e","Type":"ContainerStarted","Data":"b01e456a31a7e0718ddba3b0cda0b5959a52ff29b15286c62a6291d2d96dae2b"} Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.289191 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.314692 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerStarted","Data":"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73"} Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.316005 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.330837 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5947965cdc-wl48v" event={"ID":"564d3b50-7cec-4913-bac8-64af532aa32f","Type":"ContainerStarted","Data":"cfcd2c8094e422e569c7ba510cc7201f5fa7af1a26ec251ecbe01c2340b45374"} Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.352876 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.381717 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.383586 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.393389 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.393597 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.428308 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.429370 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" podStartSLOduration=8.429345486 podStartE2EDuration="8.429345486s" podCreationTimestamp="2026-01-29 11:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:40.35794614 +0000 UTC m=+1126.230980351" watchObservedRunningTime="2026-01-29 11:17:40.429345486 +0000 UTC m=+1126.302379677" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541773 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541819 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541861 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541902 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.542012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.542054 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.542087 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.643585 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.643768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646325 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646445 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646493 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646534 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646572 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 
crc kubenswrapper[4593]: I0129 11:17:40.646609 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.647313 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.648785 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.652434 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.671719 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.678413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.684335 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.693090 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.713101 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.747836 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.774852 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.819096 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:40 crc kubenswrapper[4593]: W0129 11:17:40.913225 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7289daaa_acda_4854_a506_c6cc429562d3.slice/crio-db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4 WatchSource:0}: Error finding container db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4: Status 404 returned error can't find the container with id db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4 Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.155766 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.179509 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" path="/var/lib/kubelet/pods/fdda5015-0c28-4ab0-befd-715cb8a987e3/volumes" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.200219 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerStarted","Data":"532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376525 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" containerID="cri-o://6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9" gracePeriod=30 Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376871 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376894 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" containerID="cri-o://532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac" gracePeriod=30 Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.383807 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5947965cdc-wl48v" 
event={"ID":"564d3b50-7cec-4913-bac8-64af532aa32f","Type":"ContainerStarted","Data":"ab5ff234cb486571f2ea563777120d15ec2665801fe297fa0f02a5645faa2e70"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.389139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerStarted","Data":"49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.391399 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerStarted","Data":"db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.393754 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.408780 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.408757732 podStartE2EDuration="9.408757732s" podCreationTimestamp="2026-01-29 11:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:41.399372132 +0000 UTC m=+1127.272406323" watchObservedRunningTime="2026-01-29 11:17:41.408757732 +0000 UTC m=+1127.281791933" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.429495 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" event={"ID":"5f3c398f-928a-4f7e-9e76-6978b8a3673e","Type":"ContainerStarted","Data":"9f8ba3debfac9d511eedbf82e0f3be84890aaa0c424afc934876a51b18b17b56"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.699150 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5947965cdc-wl48v" podStartSLOduration=7.022099483 podStartE2EDuration="16.699127801s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="2026-01-29 11:17:28.243471287 +0000 UTC m=+1114.116505478" lastFinishedPulling="2026-01-29 11:17:37.920499605 +0000 UTC m=+1123.793533796" observedRunningTime="2026-01-29 11:17:41.515100719 +0000 UTC m=+1127.388134910" watchObservedRunningTime="2026-01-29 11:17:41.699127801 +0000 UTC m=+1127.572162002" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.732736 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" podStartSLOduration=6.5600329219999995 podStartE2EDuration="16.732716167s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="2026-01-29 11:17:27.746650429 +0000 UTC m=+1113.619684630" lastFinishedPulling="2026-01-29 11:17:37.919333684 +0000 UTC m=+1123.792367875" observedRunningTime="2026-01-29 11:17:41.567244392 +0000 UTC m=+1127.440278583" watchObservedRunningTime="2026-01-29 11:17:41.732716167 +0000 UTC m=+1127.605750358" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:42.512916 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26"} Jan 29 11:17:42 crc 
kubenswrapper[4593]: I0129 11:17:42.555271 4593 generic.go:334] "Generic (PLEG): container finished" podID="95847704-1027-4518-9f5c-cd663496b804" containerID="6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9" exitCode=143 Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:42.555359 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerDied","Data":"6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.248740 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.314497 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.417169 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.417413 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" containerID="cri-o://b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c" gracePeriod=30 Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.418104 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" containerID="cri-o://fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80" gracePeriod=30 Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.640459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerStarted","Data":"4bb371c1c9d2fcc4f80bfb03ebb66d3dd6167a7190179617153d4df635eb3592"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.715683 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerStarted","Data":"24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.756526 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerStarted","Data":"90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.762611 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.546937843 podStartE2EDuration="11.762586756s" podCreationTimestamp="2026-01-29 11:17:32 +0000 UTC" firstStartedPulling="2026-01-29 11:17:33.514048036 +0000 UTC m=+1119.387082227" lastFinishedPulling="2026-01-29 11:17:38.729696949 +0000 UTC m=+1124.602731140" observedRunningTime="2026-01-29 11:17:43.743568978 +0000 UTC m=+1129.616603169" watchObservedRunningTime="2026-01-29 11:17:43.762586756 +0000 UTC m=+1129.635620947" Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.787573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.800258 4593 generic.go:334] "Generic (PLEG): container finished" podID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerID="b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c" exitCode=143 Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.800303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerDied","Data":"b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.195296 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.149:8778/\": read tcp 10.217.0.2:54274->10.217.0.149:8778: read: connection reset by peer" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.195301 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.149:8778/\": read tcp 10.217.0.2:54282->10.217.0.149:8778: read: connection reset by peer" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.500934 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.500935 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818445 4593 generic.go:334] "Generic (PLEG): container finished" podID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerID="fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80" exitCode=0 Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818759 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerDied","Data":"fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818787 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerDied","Data":"32fdfc7881c963abaad68073c4d49c25e3c8cc05f9fcc814488ad8238d96326b"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818797 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32fdfc7881c963abaad68073c4d49c25e3c8cc05f9fcc814488ad8238d96326b" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.823814 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerStarted","Data":"d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.828299 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerStarted","Data":"3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.850429 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.850405606 podStartE2EDuration="6.850405606s" podCreationTimestamp="2026-01-29 11:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:44.846991884 +0000 UTC m=+1130.720026075" watchObservedRunningTime="2026-01-29 11:17:44.850405606 +0000 UTC m=+1130.723439797" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.909836 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.909927 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.910858 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996"} pod="openstack/horizon-fbf566cdb-kbm9z" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.910904 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" containerID="cri-o://a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996" gracePeriod=30 Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.938026 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.968532 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.980214 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.980476 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.980688 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.981069 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.981147 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.981209 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.984556 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs" (OuterVolumeSpecName: "logs") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.001384 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m" (OuterVolumeSpecName: "kube-api-access-hb55m") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "kube-api-access-hb55m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.020134 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts" (OuterVolumeSpecName: "scripts") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100252 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100301 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100320 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100426 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.248052 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.248777 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.271407 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.272452 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1"} pod="openstack/horizon-5bdffb4784-5zp8q" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.272495 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" containerID="cri-o://948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1" gracePeriod=30 Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.310445 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.321119 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.348767 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data" (OuterVolumeSpecName: "config-data") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.389602 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.428084 4593 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.428119 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.463205 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.463335 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.519575 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.529538 4593 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.878315 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.922433 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.937097 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.198904 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.241881 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.438856 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.524230 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.907184 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerStarted","Data":"964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc"} Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.936182 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.936159215 podStartE2EDuration="6.936159215s" podCreationTimestamp="2026-01-29 11:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:46.926195279 +0000 UTC m=+1132.799229480" watchObservedRunningTime="2026-01-29 11:17:46.936159215 +0000 UTC m=+1132.809193406" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.089982 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" path="/var/lib/kubelet/pods/dcf8c6b2-659d-4fbb-82ef-d9749443f647/volumes" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.564966 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.567610 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" probeResult="failure" 
output="Get \"http://10.217.0.161:8080/\": dial tcp 10.217.0.161:8080: connect: connection refused" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.698882 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.817580 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.818429 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" containerID="cri-o://d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" gracePeriod=10 Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.000726 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928"} Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.001084 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.066939 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.534677876 podStartE2EDuration="18.06691068s" podCreationTimestamp="2026-01-29 11:17:30 +0000 UTC" firstStartedPulling="2026-01-29 11:17:32.152156942 +0000 UTC m=+1118.025191143" lastFinishedPulling="2026-01-29 11:17:46.684389756 +0000 UTC m=+1132.557423947" observedRunningTime="2026-01-29 11:17:48.055934237 +0000 UTC m=+1133.928968428" watchObservedRunningTime="2026-01-29 11:17:48.06691068 +0000 UTC m=+1133.939944871" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.723457 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.798884 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.798975 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799062 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799093 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799189 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799223 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.855478 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l" (OuterVolumeSpecName: "kube-api-access-66q5l") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "kube-api-access-66q5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.902070 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.943663 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.955908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.982125 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.006620 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.006678 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.006692 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.014981 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config" (OuterVolumeSpecName: "config") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.017107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.063966 4593 generic.go:334] "Generic (PLEG): container finished" podID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" exitCode=0 Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.064792 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.065006 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerDied","Data":"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3"} Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.065046 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerDied","Data":"27df2f7abd836abf6cd98d3ccb15264008f2c53f8cce156f8a156ba7ca552d82"} Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.065068 4593 scope.go:117] "RemoveContainer" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.114816 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.114851 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.189007 4593 scope.go:117] "RemoveContainer" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.221702 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.240686 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.249866 4593 scope.go:117] "RemoveContainer" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" Jan 29 11:17:49 crc kubenswrapper[4593]: E0129 11:17:49.261877 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3\": container with ID starting with d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3 not found: ID does not exist" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.261928 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3"} err="failed to get container status \"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3\": rpc error: code = NotFound desc = could not find container \"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3\": container with ID starting with d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3 not found: ID does not exist" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.261974 4593 scope.go:117] "RemoveContainer" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" Jan 29 11:17:49 crc kubenswrapper[4593]: E0129 11:17:49.265788 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13\": container with ID starting with b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13 not found: ID does not exist" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.265829 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13"} err="failed to get container status \"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13\": rpc error: code = NotFound desc = could not find container \"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13\": container with ID starting with b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13 not found: ID does not exist" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.509859 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.510223 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.668201 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.668653 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.767649 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.842470 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.887555 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.075169 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.075681 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.473831 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.474162 4593 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.498313 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.532078 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.620968 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.621441 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" containerID="cri-o://a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" gracePeriod=30 Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.627591 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" containerID="cri-o://3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" gracePeriod=30 Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.775947 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.776293 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.974041 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.993325 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.096507 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" path="/var/lib/kubelet/pods/8fb458d5-4cf6-41ed-bf24-cc63387a17f8/volumes" Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.107815 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" exitCode=143 Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.109049 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerDied","Data":"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4"} Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.109180 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.109243 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:52 crc kubenswrapper[4593]: I0129 11:17:52.114974 4593 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Jan 29 11:17:52 crc kubenswrapper[4593]: I0129 11:17:52.115204 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:52 crc kubenswrapper[4593]: I0129 11:17:52.565781 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.161:8080/\": dial tcp 10.217.0.161:8080: connect: connection refused" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.123499 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.124395 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.350876 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.846118 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.147184 4593 generic.go:334] "Generic (PLEG): container finished" podID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerID="b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9" exitCode=0 Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.147285 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerDied","Data":"b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9"} Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.414979 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415383 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="init" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415403 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="init" Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415415 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415423 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415439 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415446 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415472 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415478 4593 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415686 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415707 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415719 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.416381 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.430376 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.430626 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pbt57" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.441691 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463249 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463410 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgpjd\" (UniqueName: \"kubernetes.io/projected/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-kube-api-access-tgpjd\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463426 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.466228 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565171 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565218 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565235 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgpjd\" (UniqueName: \"kubernetes.io/projected/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-kube-api-access-tgpjd\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565302 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.567138 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.573324 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.589437 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.594381 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgpjd\" (UniqueName: \"kubernetes.io/projected/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-kube-api-access-tgpjd\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.789046 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.237729 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.237734 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.648280 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.660804 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:38732->10.217.0.158:9311: read: connection reset by peer" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.661089 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:38716->10.217.0.158:9311: read: connection reset by peer" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.893020 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.008925 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.009360 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.009413 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.048200 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv" (OuterVolumeSpecName: "kube-api-access-59ccv") pod "1563c063-cd19-4793-97c0-45ca3e4a3e0c" (UID: "1563c063-cd19-4793-97c0-45ca3e4a3e0c"). InnerVolumeSpecName "kube-api-access-59ccv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.098829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1563c063-cd19-4793-97c0-45ca3e4a3e0c" (UID: "1563c063-cd19-4793-97c0-45ca3e4a3e0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.099167 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config" (OuterVolumeSpecName: "config") pod "1563c063-cd19-4793-97c0-45ca3e4a3e0c" (UID: "1563c063-cd19-4793-97c0-45ca3e4a3e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.112583 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.112621 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.112657 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.157712 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214344 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214401 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214447 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214472 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.216901 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs" (OuterVolumeSpecName: "logs") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.222756 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb" (OuterVolumeSpecName: "kube-api-access-bf5mb") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "kube-api-access-bf5mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229042 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" exitCode=0 Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229190 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229348 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerDied","Data":"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229393 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerDied","Data":"ab6230e4600dcb9af699c78e8e565ba5926552d85dcff4c655fbdfc2c4ef02b3"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229411 4593 scope.go:117] "RemoveContainer" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.231052 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab","Type":"ContainerStarted","Data":"307341a79971f8d77af36b3ff21c83ffc9327dc92bd703679c1d5bcd8132b20d"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.232782 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.241508 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerDied","Data":"e190e45570748f76e4003c2271bb97bb9945d02157bf9978762b8a5417306bd1"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.241557 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e190e45570748f76e4003c2271bb97bb9945d02157bf9978762b8a5417306bd1" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.241644 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.259227 4593 scope.go:117] "RemoveContainer" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.292196 4593 scope.go:117] "RemoveContainer" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.294605 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53\": container with ID starting with 3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53 not found: ID does not exist" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.294796 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53"} err="failed to get container status \"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53\": rpc error: code = NotFound desc = could not find container \"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53\": container with ID starting with 3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53 not found: ID does not exist" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.294901 4593 scope.go:117] "RemoveContainer" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.295520 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4\": container with ID starting with a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4 not found: ID does not exist" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.295563 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4"} err="failed to get container status \"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4\": rpc error: code = NotFound desc = could not find container \"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4\": container with ID starting with a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4 not found: ID does not exist" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.306341 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.310988 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data" (OuterVolumeSpecName: "config-data") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318008 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318049 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318064 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318075 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318089 4593 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514392 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.514762 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514786 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.514796 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514802 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.514834 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerName="neutron-db-sync" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514840 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerName="neutron-db-sync" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514994 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerName="neutron-db-sync" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.515006 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.515021 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.515875 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.606570 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.623388 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626296 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626358 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626395 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626453 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626496 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.649711 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.702069 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.716291 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.722542 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xg5l8" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.724103 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.724270 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.724454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728510 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728554 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728598 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728682 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728730 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728760 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.730008 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.731057 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.736694 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.736765 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.737835 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.738061 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.804540 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844316 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844416 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844766 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc 
kubenswrapper[4593]: I0129 11:17:57.844822 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.863741 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.947944 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948041 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948065 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948101 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.956580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.959664 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.959811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " 
pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.981384 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.982134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.077054 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.391969 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.392522 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.505393 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.608031 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.916621 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:17:58 crc kubenswrapper[4593]: W0129 11:17:58.945103 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8e6616_b9af_427f_9daa_d62ee3cb24d3.slice/crio-88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526 WatchSource:0}: Error finding container 88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526: Status 404 returned error can't find the container with id 88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526 Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.999511 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.999652 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.104232 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" path="/var/lib/kubelet/pods/f5d54c2a-3590-4623-8641-e3906d9ef79e/volumes" Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.294284 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerStarted","Data":"88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526"} Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.300974 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="7aadd015-f714-41cf-b532-396d9f5f3946" containerID="d7d10b40887ad7cb3695100bfd7e2e09a54897e25591da02ac46e6c0d27cc415" exitCode=0 Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.301241 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" containerID="cri-o://49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb" gracePeriod=30 Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.301929 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerDied","Data":"d7d10b40887ad7cb3695100bfd7e2e09a54897e25591da02ac46e6c0d27cc415"} Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.301959 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerStarted","Data":"f371f618c4302fbf0bf3244208980a3b33a4e263434fd709be03f076a3036627"} Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.302313 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" containerID="cri-o://24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047" gracePeriod=30 Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.043506 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.043992 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.051365 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.267348 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.345574 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerStarted","Data":"71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059"} Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.345663 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.373558 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerStarted","Data":"ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff"} Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.373800 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.373893 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerStarted","Data":"09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a"} Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.400779 4593 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" podStartSLOduration=3.400754738 podStartE2EDuration="3.400754738s" podCreationTimestamp="2026-01-29 11:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:00.376614374 +0000 UTC m=+1146.249648565" watchObservedRunningTime="2026-01-29 11:18:00.400754738 +0000 UTC m=+1146.273788929" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.431527 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dc77db4b8-s2bq6" podStartSLOduration=3.431499029 podStartE2EDuration="3.431499029s" podCreationTimestamp="2026-01-29 11:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:00.408862665 +0000 UTC m=+1146.281896856" watchObservedRunningTime="2026-01-29 11:18:00.431499029 +0000 UTC m=+1146.304533220" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.294060 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.433371 4593 generic.go:334] "Generic (PLEG): container finished" podID="10756552-28da-4e84-9c43-fb2be288e81f" containerID="24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047" exitCode=0 Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.433408 4593 generic.go:334] "Generic (PLEG): container finished" podID="10756552-28da-4e84-9c43-fb2be288e81f" containerID="49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb" exitCode=0 Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.434394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerDied","Data":"24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047"} Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.434431 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerDied","Data":"49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb"} Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.974727 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84867bd7b9-4vrb9"] Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.976460 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.981069 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.981386 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.012411 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84867bd7b9-4vrb9"] Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.071546 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124403 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124831 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124919 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124969 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125151 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125218 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125333 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125697 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-combined-ca-bundle\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s6qt\" (UniqueName: \"kubernetes.io/projected/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-kube-api-access-9s6qt\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125980 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-httpd-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126026 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-public-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-ovndb-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126103 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-internal-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126312 4593 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.150869 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts" (OuterVolumeSpecName: "scripts") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.162419 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.173141 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5" (OuterVolumeSpecName: "kube-api-access-smgc5") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "kube-api-access-smgc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234584 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-combined-ca-bundle\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234747 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s6qt\" (UniqueName: \"kubernetes.io/projected/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-kube-api-access-9s6qt\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-httpd-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234832 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-public-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234865 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-ovndb-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234884 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-internal-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234958 4593 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234969 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234980 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc 
kubenswrapper[4593]: I0129 11:18:02.246929 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-internal-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.260145 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-combined-ca-bundle\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.269763 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.277512 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-public-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.277515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-ovndb-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.286168 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-httpd-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.318529 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s6qt\" (UniqueName: \"kubernetes.io/projected/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-kube-api-access-9s6qt\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.348027 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.380457 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.445683 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.467001 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerDied","Data":"966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0"} Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.467051 4593 scope.go:117] "RemoveContainer" containerID="24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.467175 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.479368 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k4l8n_9194cbfb-27b9-47e8-90eb-64b9391d0b07/registry-server/0.log" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.495839 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9" exitCode=137 Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.496527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9"} Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.504866 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data" (OuterVolumeSpecName: "config-data") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.524924 4593 scope.go:117] "RemoveContainer" containerID="49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.547817 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.844024 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.868767 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907032 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:02 crc kubenswrapper[4593]: E0129 11:18:02.907404 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907422 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" Jan 29 11:18:02 crc kubenswrapper[4593]: E0129 11:18:02.907442 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907448 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907617 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907662 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.908528 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.914194 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.930139 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.060863 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5516e5e9-a6e4-4877-bd34-af4128cc7e33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.060938 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-scripts\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061064 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061135 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfmp\" (UniqueName: \"kubernetes.io/projected/5516e5e9-a6e4-4877-bd34-af4128cc7e33-kube-api-access-hjfmp\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.103085 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10756552-28da-4e84-9c43-fb2be288e81f" path="/var/lib/kubelet/pods/10756552-28da-4e84-9c43-fb2be288e81f/volumes" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163201 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163278 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163349 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163398 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfmp\" (UniqueName: \"kubernetes.io/projected/5516e5e9-a6e4-4877-bd34-af4128cc7e33-kube-api-access-hjfmp\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163481 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5516e5e9-a6e4-4877-bd34-af4128cc7e33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163517 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-scripts\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.169615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.169727 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5516e5e9-a6e4-4877-bd34-af4128cc7e33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.170280 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-scripts\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.176341 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.195181 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.195985 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfmp\" (UniqueName: \"kubernetes.io/projected/5516e5e9-a6e4-4877-bd34-af4128cc7e33-kube-api-access-hjfmp\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.326581 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.345208 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84867bd7b9-4vrb9"] Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.433859 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.546422 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84867bd7b9-4vrb9" event={"ID":"174d0d16-4c6e-403a-bf10-0a69b4e98fb1","Type":"ContainerStarted","Data":"abb0936fdc501c6fd66d807c5b1109e1663f7b99c3b19651569fcf3b3fd0d74b"} Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.557340 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k4l8n_9194cbfb-27b9-47e8-90eb-64b9391d0b07/registry-server/0.log" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.561864 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.926524 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.580271 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84867bd7b9-4vrb9" event={"ID":"174d0d16-4c6e-403a-bf10-0a69b4e98fb1","Type":"ContainerStarted","Data":"2acb4fa35d4afa0e84525e5f6be668bf1ac762b1e2bd13f1644f9ec69cb6cf3d"} Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.581228 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84867bd7b9-4vrb9" event={"ID":"174d0d16-4c6e-403a-bf10-0a69b4e98fb1","Type":"ContainerStarted","Data":"4f058cbdce9737012ff485ff8ec301e5a9e74f34b759b32bb8eae25cca8f5acc"} Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.581899 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.584827 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5516e5e9-a6e4-4877-bd34-af4128cc7e33","Type":"ContainerStarted","Data":"3168e5f76687d9beb56498941ffa703bdf21d9536851728a69bc369fa9efead7"} Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.615538 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84867bd7b9-4vrb9" podStartSLOduration=3.615514932 podStartE2EDuration="3.615514932s" podCreationTimestamp="2026-01-29 11:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 11:18:04.609077079 +0000 UTC m=+1150.482111270" watchObservedRunningTime="2026-01-29 11:18:04.615514932 +0000 UTC m=+1150.488549123" Jan 29 11:18:05 crc kubenswrapper[4593]: I0129 11:18:05.622469 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5516e5e9-a6e4-4877-bd34-af4128cc7e33","Type":"ContainerStarted","Data":"351b4877f3dbb97ff5c9c41efa352d54dba91cf00802c322b48a40cd15d9e957"} Jan 29 11:18:06 crc kubenswrapper[4593]: I0129 11:18:06.637099 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5516e5e9-a6e4-4877-bd34-af4128cc7e33","Type":"ContainerStarted","Data":"6312f89bb42170d2ee932fb1e176e775bca45bca9b1af753eb54b2a689086c06"} Jan 29 11:18:06 crc kubenswrapper[4593]: I0129 11:18:06.660721 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.660700719 podStartE2EDuration="4.660700719s" podCreationTimestamp="2026-01-29 11:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:06.653223079 +0000 UTC m=+1152.526257270" watchObservedRunningTime="2026-01-29 11:18:06.660700719 +0000 UTC m=+1152.533734910" Jan 29 11:18:07 crc kubenswrapper[4593]: I0129 11:18:07.865816 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:18:07 crc kubenswrapper[4593]: I0129 11:18:07.950284 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:18:07 crc kubenswrapper[4593]: I0129 11:18:07.950778 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" containerID="cri-o://a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" gracePeriod=10 Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.326920 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.480267 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.545166 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.571314 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.609313 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.609358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.609378 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.610468 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.610784 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.610816 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.669886 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96" (OuterVolumeSpecName: "kube-api-access-cwb96") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "kube-api-access-cwb96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708046 4593 generic.go:334] "Generic (PLEG): container finished" podID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" exitCode=0 Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708827 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708889 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerDied","Data":"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73"} Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708914 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerDied","Data":"564ff28580e51f15a586a4b36ebebac1a1de37d8a71b76aea863a2b018150e6b"} Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708931 4593 scope.go:117] "RemoveContainer" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.717714 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.767330 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.807953 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.808177 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config" (OuterVolumeSpecName: "config") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.818811 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820040 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820058 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820069 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820078 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.832533 4593 scope.go:117] "RemoveContainer" containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.859068 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.881731 4593 scope.go:117] "RemoveContainer" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" Jan 29 11:18:08 crc kubenswrapper[4593]: E0129 11:18:08.884999 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73\": container with ID starting with a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73 not found: ID does not exist" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.885195 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73"} err="failed to get container status \"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73\": rpc error: code = NotFound desc = could not find container \"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73\": container with ID starting with a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73 not found: ID does not exist" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.885358 4593 scope.go:117] "RemoveContainer" containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" Jan 29 11:18:08 crc kubenswrapper[4593]: E0129 11:18:08.886938 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5\": container with ID starting with b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5 not found: ID does not exist" 
containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.886991 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5"} err="failed to get container status \"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5\": rpc error: code = NotFound desc = could not find container \"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5\": container with ID starting with b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5 not found: ID does not exist" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.921309 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:09 crc kubenswrapper[4593]: I0129 11:18:09.049705 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:18:09 crc kubenswrapper[4593]: I0129 11:18:09.056685 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:18:09 crc kubenswrapper[4593]: I0129 11:18:09.091612 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" path="/var/lib/kubelet/pods/cad93c02-cde3-4a50-9f89-1800d0436d2d/volumes" Jan 29 11:18:10 crc kubenswrapper[4593]: I0129 11:18:10.053828 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:18:10 crc kubenswrapper[4593]: I0129 11:18:10.053869 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:18:11 crc kubenswrapper[4593]: I0129 11:18:11.104599 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:11 crc kubenswrapper[4593]: > Jan 29 11:18:11 crc kubenswrapper[4593]: I0129 11:18:11.741303 4593 generic.go:334] "Generic (PLEG): container finished" podID="95847704-1027-4518-9f5c-cd663496b804" containerID="532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac" exitCode=137 Jan 29 11:18:11 crc kubenswrapper[4593]: I0129 11:18:11.741533 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerDied","Data":"532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac"} Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.955332 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956010 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" containerID="cri-o://b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7" gracePeriod=30 Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956017 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" containerID="cri-o://aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26" gracePeriod=30 Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956198 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" containerID="cri-o://88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928" gracePeriod=30 Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956272 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" containerID="cri-o://a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67" gracePeriod=30 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771191 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928" exitCode=0 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771222 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7" exitCode=2 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771232 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26" exitCode=0 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771241 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67" exitCode=0 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771260 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771286 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771295 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.791766 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 11:18:14 crc kubenswrapper[4593]: I0129 11:18:14.771458 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:14 crc kubenswrapper[4593]: I0129 11:18:14.772008 4593 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/glance-default-internal-api-0" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" containerID="cri-o://d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64" gracePeriod=30 Jan 29 11:18:14 crc kubenswrapper[4593]: I0129 11:18:14.772282 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" containerID="cri-o://964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc" gracePeriod=30 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.692428 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58d6d94967-wdzcg"] Jan 29 11:18:15 crc kubenswrapper[4593]: E0129 11:18:15.692911 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="init" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.692932 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="init" Jan 29 11:18:15 crc kubenswrapper[4593]: E0129 11:18:15.692961 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.692971 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.693199 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.695909 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.698767 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.699292 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.699500 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.719699 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58d6d94967-wdzcg"] Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761526 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-combined-ca-bundle\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761595 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-log-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761721 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6624\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-kube-api-access-x6624\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761792 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-run-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761858 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-internal-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-public-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761994 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-config-data\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " 
pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.762130 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-etc-swift\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.803856 4593 generic.go:334] "Generic (PLEG): container finished" podID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerID="d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64" exitCode=143 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.803944 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerDied","Data":"d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64"} Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.810262 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996" exitCode=137 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.810386 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996"} Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.827270 4593 generic.go:334] "Generic (PLEG): container finished" podID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerID="948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1" exitCode=137 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.827331 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerDied","Data":"948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1"} Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.863617 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-config-data\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.863689 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-etc-swift\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865273 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-combined-ca-bundle\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865324 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-log-httpd\") pod 
\"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6624\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-kube-api-access-x6624\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865451 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-run-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865511 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-internal-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865577 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-public-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.867344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-run-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.867994 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-log-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.872382 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-combined-ca-bundle\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.873213 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-etc-swift\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.873672 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-internal-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " 
pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.874059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-config-data\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.874781 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-public-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.892511 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6624\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-kube-api-access-x6624\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:16 crc kubenswrapper[4593]: I0129 11:18:16.020058 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.310009 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.469848 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.470162 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" containerID="cri-o://90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3" gracePeriod=30 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.470277 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" containerID="cri-o://3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5" gracePeriod=30 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.901217 4593 generic.go:334] "Generic (PLEG): container finished" podID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerID="964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc" exitCode=0 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.901286 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerDied","Data":"964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc"} Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.905405 4593 generic.go:334] "Generic (PLEG): container finished" podID="7289daaa-acda-4854-a506-c6cc429562d3" containerID="90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3" exitCode=143 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.905434 4593 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerDied","Data":"90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3"} Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.564236 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.564749 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchcdhb4h5bch5dfh66bh54fhb5hc9h5f4h5b8h5h665h69h74h68ch5f6hb6h546h79h76h5c9h6ch68ch89hf4h4h4h76h9h58bh65q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgpjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(220bdfcb-98c4-4c78-8d95-ea64edfaf1ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.565984 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="220bdfcb-98c4-4c78-8d95-ea64edfaf1ab" Jan 29 11:18:20 crc kubenswrapper[4593]: I0129 11:18:20.988259 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerDied","Data":"5819a6ffae38a266d2b0e8c7f0f4a9a9ec8806aff42d69e8d72319628c862e12"} Jan 29 11:18:20 crc kubenswrapper[4593]: I0129 11:18:20.988492 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5819a6ffae38a266d2b0e8c7f0f4a9a9ec8806aff42d69e8d72319628c862e12" Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.992858 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="220bdfcb-98c4-4c78-8d95-ea64edfaf1ab" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.051979 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.138477 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:21 crc kubenswrapper[4593]: > Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174058 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174160 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174296 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174319 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174389 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.177820 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.181241 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs" (OuterVolumeSpecName: "logs") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.184862 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts" (OuterVolumeSpecName: "scripts") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.190821 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.190898 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x" (OuterVolumeSpecName: "kube-api-access-sg67x") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "kube-api-access-sg67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.260426 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280837 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280873 4593 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280884 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280892 4593 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280900 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280908 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.338382 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data" (OuterVolumeSpecName: "config-data") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.373364 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.384392 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485733 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485840 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485882 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485926 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485980 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.486029 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.486057 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.488422 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.488912 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.497490 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts" (OuterVolumeSpecName: "scripts") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.509913 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8" (OuterVolumeSpecName: "kube-api-access-6bdq8") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "kube-api-access-6bdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.572937 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590866 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590900 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590908 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590919 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590928 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.646918 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.693881 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.731354 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.785797 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data" (OuterVolumeSpecName: "config-data") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.794835 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.794919 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.794985 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795117 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795194 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795859 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.796474 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.797034 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs" (OuterVolumeSpecName: "logs") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.809165 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts" (OuterVolumeSpecName: "scripts") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.813561 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.828970 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2" (OuterVolumeSpecName: "kube-api-access-z9nh2") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "kube-api-access-z9nh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.875037 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.899795 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900049 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900145 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900218 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900283 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900347 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.912803 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.929845 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data" (OuterVolumeSpecName: "config-data") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.934264 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.015378 4593 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.015599 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.015713 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.094206 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.114856 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"0eb50a3ac1f633cc99edb2df912ed9ee0643f4c8b02ce477d7d327cbda5af774"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.114919 4593 scope.go:117] "RemoveContainer" containerID="88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.115105 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.147678 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58d6d94967-wdzcg"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.185212 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.215269 4593 scope.go:117] "RemoveContainer" containerID="b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.215526 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.217779 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.218169 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.218229 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerDied","Data":"4bb371c1c9d2fcc4f80bfb03ebb66d3dd6167a7190179617153d4df635eb3592"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.283721 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.313397 4593 scope.go:117] "RemoveContainer" containerID="aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.353796 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354230 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354249 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354261 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354267 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354290 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354298 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354317 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354324 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354339 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354356 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354368 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354374 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354386 4593 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354392 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354404 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354411 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354567 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354580 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354588 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354603 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354614 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354625 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354666 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354675 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.356189 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.360392 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.360624 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.385215 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.403679 4593 scope.go:117] "RemoveContainer" containerID="a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.411690 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431456 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431536 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431560 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431587 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431602 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431696 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.433882 4593 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.458704 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.482166 4593 scope.go:117] "RemoveContainer" containerID="964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.492718 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.526250 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.527825 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.530189 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534607 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534700 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534724 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534756 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534771 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534834 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534878 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " 
pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.539332 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.540712 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.541203 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.549619 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.551240 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.565257 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.581608 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.592204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636246 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636288 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636321 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636384 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636416 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdcdw\" (UniqueName: \"kubernetes.io/projected/c4f0192e-509d-46a4-9a2a-c82106019381-kube-api-access-gdcdw\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636462 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.664669 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.666309 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.686574 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.687336 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.687491 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.704881 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.739861 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.748505 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.751765 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.751878 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.751967 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsqs\" (UniqueName: \"kubernetes.io/projected/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-kube-api-access-tcsqs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752002 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752059 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-scripts\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752112 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdcdw\" (UniqueName: \"kubernetes.io/projected/c4f0192e-509d-46a4-9a2a-c82106019381-kube-api-access-gdcdw\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752269 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752314 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752345 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752406 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752498 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752553 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752601 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-logs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752701 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752736 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752776 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.764219 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.766256 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.768945 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.769918 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.771336 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.775725 4593 scope.go:117] "RemoveContainer" containerID="d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.779894 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.781819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.787291 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.813299 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdcdw\" (UniqueName: \"kubernetes.io/projected/c4f0192e-509d-46a4-9a2a-c82106019381-kube-api-access-gdcdw\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857012 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857075 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857137 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857165 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-logs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857194 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857214 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857235 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857281 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsqs\" (UniqueName: \"kubernetes.io/projected/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-kube-api-access-tcsqs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857311 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-scripts\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.858873 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-logs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.866734 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.876051 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.877958 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-scripts\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.878432 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.885250 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.885364 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.901094 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.904772 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.905863 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsqs\" (UniqueName: \"kubernetes.io/projected/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-kube-api-access-tcsqs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.932448 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.033552 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.111035 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" path="/var/lib/kubelet/pods/852a4805-5ddc-4a1d-a642-9d5e6bbb9206/volumes" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.112166 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" path="/var/lib/kubelet/pods/911edffc-f4d0-40bf-b49c-c1ab592dd258/volumes" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.113559 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95847704-1027-4518-9f5c-cd663496b804" path="/var/lib/kubelet/pods/95847704-1027-4518-9f5c-cd663496b804/volumes" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.297808 4593 generic.go:334] "Generic (PLEG): container finished" podID="7289daaa-acda-4854-a506-c6cc429562d3" containerID="3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5" exitCode=0 Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.298180 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerDied","Data":"3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5"} Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.312677 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.330080 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58d6d94967-wdzcg" event={"ID":"f1bc6621-0892-452c-9f95-54554f8c6e68","Type":"ContainerStarted","Data":"3a88b331aa6b8c8e95781edc38ffd4762f674838fa864fc8b53fe87a5d08785f"} Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.330152 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58d6d94967-wdzcg" event={"ID":"f1bc6621-0892-452c-9f95-54554f8c6e68","Type":"ContainerStarted","Data":"922c276c74b50b8fe632937198b2477a6a9b17d827dc74f6a75da896a0452cf2"} Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.476789 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.491702 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.492689 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.492797 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.492896 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.493207 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.493481 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.493606 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.494038 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.494983 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.500901 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs" (OuterVolumeSpecName: "logs") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.532293 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts" (OuterVolumeSpecName: "scripts") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.534856 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg" (OuterVolumeSpecName: "kube-api-access-p5xlg") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "kube-api-access-p5xlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.600563 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.602220 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.602342 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.602426 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.609560 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.688599 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:23 crc kubenswrapper[4593]: W0129 11:18:23.701378 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78ec86eb_f94b_4f7f_83f0_30c10fd87869.slice/crio-711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5 WatchSource:0}: Error finding container 711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5: Status 404 returned error can't find the container with id 711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5 Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.705389 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.737027 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.808111 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.836900 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.837308 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.839975 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data" (OuterVolumeSpecName: "config-data") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.910891 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.910919 4593 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.910929 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.017601 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.185545 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.359656 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerDied","Data":"db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.359706 4593 scope.go:117] "RemoveContainer" containerID="3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.359851 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.370443 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4f0192e-509d-46a4-9a2a-c82106019381","Type":"ContainerStarted","Data":"611919187e7b8eab13192430a2187608d9df802c0e23e7889c0cb34217e85d57"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.381335 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.389132 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58d6d94967-wdzcg" event={"ID":"f1bc6621-0892-452c-9f95-54554f8c6e68","Type":"ContainerStarted","Data":"f28d6a5450433f62a199b081a96fe4301a0493157d5b32a045c0f3fd0f981f35"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.390107 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.390144 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.414558 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.426994 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef","Type":"ContainerStarted","Data":"9b160d8d81e046e2cdee4c9713209e91ef7045d98f9716a3994e613efb141f42"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.434547 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.453871 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58d6d94967-wdzcg" podStartSLOduration=9.453840316 podStartE2EDuration="9.453840316s" podCreationTimestamp="2026-01-29 11:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:24.441537122 +0000 UTC m=+1170.314571313" watchObservedRunningTime="2026-01-29 11:18:24.453840316 +0000 UTC m=+1170.326874507" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.478725 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: E0129 11:18:24.479407 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479434 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" Jan 29 11:18:24 crc kubenswrapper[4593]: E0129 11:18:24.479486 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479497 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479700 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479722 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.481217 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.485698 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.486067 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530160 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-config-data\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530204 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530484 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-logs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530564 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530596 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530759 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-scripts\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530809 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h42x\" (UniqueName: \"kubernetes.io/projected/43872652-3bb2-4a5c-9b13-cb25d625cd01-kube-api-access-7h42x\") pod \"glance-default-external-api-0\" 
(UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.569763 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.593166 4593 scope.go:117] "RemoveContainer" containerID="90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-logs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635192 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635226 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635256 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635326 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-scripts\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635363 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h42x\" (UniqueName: \"kubernetes.io/projected/43872652-3bb2-4a5c-9b13-cb25d625cd01-kube-api-access-7h42x\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-config-data\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635495 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 
11:18:24.635496 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635725 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.638511 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-logs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.646007 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-scripts\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.668469 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h42x\" (UniqueName: \"kubernetes.io/projected/43872652-3bb2-4a5c-9b13-cb25d625cd01-kube-api-access-7h42x\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.678670 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.740609 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.742072 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-config-data\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.769260 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.866501 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.909479 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.910453 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.049700 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.050712 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.125239 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7289daaa-acda-4854-a506-c6cc429562d3" path="/var/lib/kubelet/pods/7289daaa-acda-4854-a506-c6cc429562d3/volumes" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.920936 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:25 crc kubenswrapper[4593]: W0129 11:18:25.931971 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43872652_3bb2_4a5c_9b13_cb25d625cd01.slice/crio-f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8 WatchSource:0}: Error finding container f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8: Status 404 returned error can't find the container with id f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8 Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.524146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"} Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.525338 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef","Type":"ContainerStarted","Data":"0f4ba927110e42f4575d57fa22b020fb5f291b538c1cb3b4b67bbdeb4239900e"} Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.526371 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4f0192e-509d-46a4-9a2a-c82106019381","Type":"ContainerStarted","Data":"99ffedd87bdad963c0fac83d916ef7a3dfa821991254407dd583eb4da850308a"} Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.528087 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43872652-3bb2-4a5c-9b13-cb25d625cd01","Type":"ContainerStarted","Data":"f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8"} Jan 29 11:18:27 crc kubenswrapper[4593]: I0129 11:18:27.547028 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4f0192e-509d-46a4-9a2a-c82106019381","Type":"ContainerStarted","Data":"4c05dcb5cd7f81485fe4d9e1347db0f5e68c055073e01d377db0e1d469245ae3"} Jan 29 11:18:27 crc kubenswrapper[4593]: I0129 11:18:27.551475 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.037145 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58d6d94967-wdzcg" podUID="f1bc6621-0892-452c-9f95-54554f8c6e68" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.045722 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58d6d94967-wdzcg" podUID="f1bc6621-0892-452c-9f95-54554f8c6e68" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.227129 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.428088 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.428386 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" containerID="cri-o://86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" gracePeriod=30 Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.585538 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43872652-3bb2-4a5c-9b13-cb25d625cd01","Type":"ContainerStarted","Data":"544524f295bee87031c7a71defb576c27cc4dcaa1ba684a41c30b9be9bac1142"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.599370 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.606960 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef","Type":"ContainerStarted","Data":"9231199f33065bde95f80e5a36be406be2d308a0f6901f81b0b5c94971e920e5"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.606998 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.751216 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.751192701 podStartE2EDuration="6.751192701s" podCreationTimestamp="2026-01-29 11:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:28.73492664 +0000 UTC m=+1174.607960831" watchObservedRunningTime="2026-01-29 11:18:28.751192701 +0000 UTC m=+1174.624226892" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.805840 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.8058103899999995 podStartE2EDuration="6.80581039s" podCreationTimestamp="2026-01-29 11:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:28.780713771 
+0000 UTC m=+1174.653747972" watchObservedRunningTime="2026-01-29 11:18:28.80581039 +0000 UTC m=+1174.678844581" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.305038 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.375358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"1512a75d-a403-420b-a9be-f931faf1273a\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.383035 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2" (OuterVolumeSpecName: "kube-api-access-fsks2") pod "1512a75d-a403-420b-a9be-f931faf1273a" (UID: "1512a75d-a403-420b-a9be-f931faf1273a"). InnerVolumeSpecName "kube-api-access-fsks2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.477999 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.653588 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43872652-3bb2-4a5c-9b13-cb25d625cd01","Type":"ContainerStarted","Data":"49e8574d8790b66f47ddc46c109214e8113927c00b90278fd8fb5f822d2ca25c"} Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670334 4593 generic.go:334] "Generic (PLEG): container finished" podID="1512a75d-a403-420b-a9be-f931faf1273a" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" exitCode=2 Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670442 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerDied","Data":"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4"} Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerDied","Data":"a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2"} Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670539 4593 scope.go:117] "RemoveContainer" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670837 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.693292 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.693264353 podStartE2EDuration="5.693264353s" podCreationTimestamp="2026-01-29 11:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:29.681132245 +0000 UTC m=+1175.554166436" watchObservedRunningTime="2026-01-29 11:18:29.693264353 +0000 UTC m=+1175.566298544" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.735106 4593 scope.go:117] "RemoveContainer" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" Jan 29 11:18:29 crc kubenswrapper[4593]: E0129 11:18:29.738409 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4\": container with ID starting with 86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4 not found: ID does not exist" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.738761 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4"} err="failed to get container status \"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4\": rpc error: code = NotFound desc = could not find container \"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4\": container with ID starting with 86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4 not found: ID does not exist" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.745000 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.760796 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.770709 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: E0129 11:18:29.776796 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.776832 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.777057 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.777697 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.783617 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.793435 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.794165 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.886788 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.886844 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.886987 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbk9\" (UniqueName: \"kubernetes.io/projected/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-api-access-4sbk9\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.887261 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989330 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989382 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbk9\" (UniqueName: \"kubernetes.io/projected/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-api-access-4sbk9\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989465 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:29.996337 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.007424 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbk9\" (UniqueName: \"kubernetes.io/projected/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-api-access-4sbk9\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.013237 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.015740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.121253 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.748237 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.034179 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.040512 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.087361 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1512a75d-a403-420b-a9be-f931faf1273a" path="/var/lib/kubelet/pods/1512a75d-a403-420b-a9be-f931faf1273a/volumes" Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.142400 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:31 crc kubenswrapper[4593]: > Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.690018 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d0c0ba2-e8ed-4361-8aff-e71714a1617c","Type":"ContainerStarted","Data":"005ccb1e86c96c8065cec7df499a3e3c287f9afa66306410ebb021bd06437715"} Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.399527 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.486128 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.486580 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dc77db4b8-s2bq6" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" containerID="cri-o://09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a" gracePeriod=30 Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.487096 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dc77db4b8-s2bq6" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" containerID="cri-o://ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff" gracePeriod=30 Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.732987 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"} Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.733672 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.742587 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d0c0ba2-e8ed-4361-8aff-e71714a1617c","Type":"ContainerStarted","Data":"1680d182c5e7643ac7fecdecbd039a081e331c0fc039793d441768833bdfb2ad"} Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.743742 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 
11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.772161 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.327027013 podStartE2EDuration="10.772130482s" podCreationTimestamp="2026-01-29 11:18:22 +0000 UTC" firstStartedPulling="2026-01-29 11:18:23.716343963 +0000 UTC m=+1169.589378154" lastFinishedPulling="2026-01-29 11:18:30.161447432 +0000 UTC m=+1176.034481623" observedRunningTime="2026-01-29 11:18:32.768980857 +0000 UTC m=+1178.642015048" watchObservedRunningTime="2026-01-29 11:18:32.772130482 +0000 UTC m=+1178.645164673" Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.802148 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.301548546 podStartE2EDuration="3.802129554s" podCreationTimestamp="2026-01-29 11:18:29 +0000 UTC" firstStartedPulling="2026-01-29 11:18:30.757898864 +0000 UTC m=+1176.630933055" lastFinishedPulling="2026-01-29 11:18:32.258479872 +0000 UTC m=+1178.131514063" observedRunningTime="2026-01-29 11:18:32.796434089 +0000 UTC m=+1178.669468280" watchObservedRunningTime="2026-01-29 11:18:32.802129554 +0000 UTC m=+1178.675163745" Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.034664 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.034937 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.087489 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.098241 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.754520 4593 generic.go:334] "Generic (PLEG): container finished" podID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerID="ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff" exitCode=0 Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.754601 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerDied","Data":"ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff"} Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.755737 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.755760 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.079487 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.671904 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.765763 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" containerID="cri-o://8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" gracePeriod=30 Jan 29 
11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.766304 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" containerID="cri-o://833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" gracePeriod=30 Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.766412 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" containerID="cri-o://a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" gracePeriod=30 Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.766442 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" containerID="cri-o://58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" gracePeriod=30 Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.868035 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.868084 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.911577 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.954526 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.956972 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.051613 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.783859 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab","Type":"ContainerStarted","Data":"7186c53b99e322b4e59d65a8c7470388e891fc309cdd4c8518722936e8a9f732"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788031 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" exitCode=0 Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788070 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" exitCode=2 Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788081 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" 
containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" exitCode=0 Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788622 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788670 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788684 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788694 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788793 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.803319 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.906812275 podStartE2EDuration="40.803293458s" podCreationTimestamp="2026-01-29 11:17:55 +0000 UTC" firstStartedPulling="2026-01-29 11:17:56.667922955 +0000 UTC m=+1142.540957146" lastFinishedPulling="2026-01-29 11:18:34.564404138 +0000 UTC m=+1180.437438329" observedRunningTime="2026-01-29 11:18:35.797019908 +0000 UTC m=+1181.670054099" watchObservedRunningTime="2026-01-29 11:18:35.803293458 +0000 UTC m=+1181.676327649" Jan 29 11:18:37 crc kubenswrapper[4593]: I0129 11:18:37.939858 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c7ea14af-5b7c-44d6-a34c-1a344bfc96ef" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:39 crc kubenswrapper[4593]: I0129 11:18:39.938916 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="c7ea14af-5b7c-44d6-a34c-1a344bfc96ef" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:40 crc kubenswrapper[4593]: I0129 11:18:40.134099 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.121846 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:41 crc kubenswrapper[4593]: > Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.403673 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444248 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444301 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444430 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444480 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444549 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444606 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.445256 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.445402 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.455615 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw" (OuterVolumeSpecName: "kube-api-access-xsjrw") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "kube-api-access-xsjrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.460768 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts" (OuterVolumeSpecName: "scripts") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.556701 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592045 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592092 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592117 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592131 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.690779 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.694892 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.694976 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.721205 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data" (OuterVolumeSpecName: "config-data") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.796409 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852323 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" exitCode=0 Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852370 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"} Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852397 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5"} Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852418 4593 scope.go:117] "RemoveContainer" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852589 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.882372 4593 scope.go:117] "RemoveContainer" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.892078 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.902340 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.918327 4593 scope.go:117] "RemoveContainer" containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937397 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937864 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937889 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937909 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937918 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937930 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937938 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937953 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937961 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938219 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938256 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938272 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938288 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.940606 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.949529 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.949897 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.954850 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.958582 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.968875 4593 scope.go:117] "RemoveContainer" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000059 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000104 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000184 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000206 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000243 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000280 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000316 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: 
I0129 11:18:42.000375 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.100904 4593 scope.go:117] "RemoveContainer" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102226 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102839 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102937 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102960 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103015 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103067 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103122 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103267 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103700 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.105849 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51\": container with ID starting with a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51 not found: ID does not exist" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.105913 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"} err="failed to get container status \"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51\": rpc error: code = NotFound desc = could not find container \"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51\": container with ID starting with a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51 not found: ID does not exist" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.105943 4593 scope.go:117] "RemoveContainer" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.107152 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.108692 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996\": container with ID starting with 833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996 not found: ID does not exist" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.108738 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"} err="failed to get container status \"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996\": rpc error: code = NotFound desc = could not find container \"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996\": container with ID starting with 833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996 not found: ID does not exist" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.108769 4593 scope.go:117] "RemoveContainer" containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.109299 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf\": container with ID starting with 58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf not found: ID does not exist" containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109344 4593 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"} err="failed to get container status \"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf\": rpc error: code = NotFound desc = could not find container \"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf\": container with ID starting with 58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf not found: ID does not exist" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109383 4593 scope.go:117] "RemoveContainer" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.109670 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba\": container with ID starting with 8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba not found: ID does not exist" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109699 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"} err="failed to get container status \"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba\": rpc error: code = NotFound desc = could not find container \"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba\": container with ID starting with 8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba not found: ID does not exist" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109896 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.117414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.118725 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.120249 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.120772 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.146692 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.338666 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.856745 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.867403 4593 generic.go:334] "Generic (PLEG): container finished" podID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerID="09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a" exitCode=0 Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.867476 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerDied","Data":"09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a"} Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.987071 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.102788 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" path="/var/lib/kubelet/pods/78ec86eb-f94b-4f7f-83f0-30c10fd87869/volumes" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.439561 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536288 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536357 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536437 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536495 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536654 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " Jan 29 11:18:43 
crc kubenswrapper[4593]: I0129 11:18:43.565828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.568843 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6" (OuterVolumeSpecName: "kube-api-access-mkjl6") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "kube-api-access-mkjl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.613079 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config" (OuterVolumeSpecName: "config") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.644004 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.644338 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.644441 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.684375 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.708514 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.730884 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.731021 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.734167 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.736693 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.736799 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.740847 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.746298 4593 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.746328 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.894434 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d"} Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.894503 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"d8c09b2b8b448508c118e29717b68e3a7cf488c8e6b3318a0fc967d165dd0e86"} Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.912119 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerDied","Data":"88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526"} Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.912171 4593 scope.go:117] "RemoveContainer" containerID="ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff" Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.912293 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.064287 4593 scope.go:117] "RemoveContainer" containerID="09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a" Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.125021 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.136850 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.911846 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.923786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1"} Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.944897 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="c7ea14af-5b7c-44d6-a34c-1a344bfc96ef" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:45 crc kubenswrapper[4593]: I0129 11:18:45.050959 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:18:45 crc kubenswrapper[4593]: I0129 11:18:45.085762 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" path="/var/lib/kubelet/pods/df8e6616-b9af-427f-9daa-d62ee3cb24d3/volumes" Jan 29 11:18:45 crc kubenswrapper[4593]: I0129 11:18:45.937720 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e"} Jan 29 11:18:49 crc kubenswrapper[4593]: I0129 11:18:49.980405 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731"} Jan 29 11:18:49 crc kubenswrapper[4593]: I0129 11:18:49.981022 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:18:51 crc kubenswrapper[4593]: I0129 11:18:51.112514 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:51 crc kubenswrapper[4593]: > Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.430955 4593 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.244013573 podStartE2EDuration="12.43093335s" podCreationTimestamp="2026-01-29 11:18:41 +0000 UTC" firstStartedPulling="2026-01-29 11:18:43.000836153 +0000 UTC m=+1188.873870344" lastFinishedPulling="2026-01-29 11:18:49.18775594 +0000 UTC m=+1195.060790121" observedRunningTime="2026-01-29 11:18:50.016541125 +0000 UTC m=+1195.889575316" watchObservedRunningTime="2026-01-29 11:18:53.43093335 +0000 UTC m=+1199.303967541" Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437496 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437877 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" containerID="cri-o://b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d" gracePeriod=30 Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437930 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" containerID="cri-o://c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1" gracePeriod=30 Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.438232 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" containerID="cri-o://9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731" gracePeriod=30 Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437924 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" containerID="cri-o://87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e" gracePeriod=30 Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033317 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731" exitCode=0 Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033566 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e" exitCode=2 Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033606 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1" exitCode=0 Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033374 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731"} Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033658 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e"} Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1"} Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.910516 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.910644 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.911553 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af"} pod="openstack/horizon-fbf566cdb-kbm9z" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.911604 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" containerID="cri-o://d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af" gracePeriod=30 Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.050012 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.050119 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.050981 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972"} pod="openstack/horizon-5bdffb4784-5zp8q" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.051018 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" containerID="cri-o://b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972" gracePeriod=30 Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.942969 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:18:57 crc kubenswrapper[4593]: E0129 11:18:57.943983 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944004 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" Jan 29 11:18:57 crc kubenswrapper[4593]: E0129 11:18:57.944020 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" Jan 29 11:18:57 crc 
kubenswrapper[4593]: I0129 11:18:57.944027 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944228 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944255 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944969 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.973674 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.973768 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.977938 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.044333 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.045711 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.072168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.077857 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.077976 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.078076 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.078102 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.079231 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.132922 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.179316 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.179411 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.180188 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.237422 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.243741 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.245101 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.286853 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.288603 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.290465 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.290897 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.355950 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.378502 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408668 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408797 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408847 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408907 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.428222 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.510947 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.511043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.511127 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.511162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " 
pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.512245 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.518045 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.538807 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.540246 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.547133 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.561288 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.582904 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.593539 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.645821 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.658094 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.663275 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.664754 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.669392 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.718413 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.734840 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.734916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.841048 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.841444 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.841484 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.842330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.843592 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.862087 
4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.941609 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.943351 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.943458 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.944006 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.973846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.011437 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.766806 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.766884 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.859770 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:18:59 crc kubenswrapper[4593]: W0129 11:18:59.876890 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc0715e_34d0_4d5e_a8cc_5809adc6e264.slice/crio-ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6 WatchSource:0}: Error finding container ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6: Status 404 returned error can't find the container with id ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6 Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.975168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.994202 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.039708 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.059481 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.101147 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.128188 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerStarted","Data":"ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6"} Jan 29 11:19:00 crc kubenswrapper[4593]: W0129 11:19:00.173938 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b37d23e_84cc_4059_a109_18fec66cd168.slice/crio-d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd WatchSource:0}: Error finding container d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd: Status 404 returned error can't find the container with id d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd Jan 29 11:19:00 crc kubenswrapper[4593]: W0129 11:19:00.177371 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5349ab78_1643_47e8_bfca_20d31e2f459f.slice/crio-bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc WatchSource:0}: Error finding container bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc: Status 404 returned error can't find the container with id 
bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc Jan 29 11:19:00 crc kubenswrapper[4593]: W0129 11:19:00.178582 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd801e2_136a_408b_a7e6_ab9a8dcfdd3b.slice/crio-dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca WatchSource:0}: Error finding container dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca: Status 404 returned error can't find the container with id dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.120891 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:01 crc kubenswrapper[4593]: > Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.138829 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerStarted","Data":"b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.138870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerStarted","Data":"dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.140550 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerStarted","Data":"6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.158395 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerStarted","Data":"97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.158466 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerStarted","Data":"d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.173525 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-86jg9" podStartSLOduration=4.173502924 podStartE2EDuration="4.173502924s" podCreationTimestamp="2026-01-29 11:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.172171838 +0000 UTC m=+1207.045206029" watchObservedRunningTime="2026-01-29 11:19:01.173502924 +0000 UTC m=+1207.046537115" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.184876 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerStarted","Data":"4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 
11:19:01.185121 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerStarted","Data":"b589e21f0266150b72b75e48575c70865e45ffe8e3a984bb6e0a7d1e0ce27721"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.190970 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerStarted","Data":"9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.192314 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerStarted","Data":"44d5e9852fdbff2c2f57298b319bc2aac423abcdb37ecfe12370febe05fe491f"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.205667 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vfj8w" podStartSLOduration=3.205650435 podStartE2EDuration="3.205650435s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.204147704 +0000 UTC m=+1207.077181895" watchObservedRunningTime="2026-01-29 11:19:01.205650435 +0000 UTC m=+1207.078684626" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.207107 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerStarted","Data":"690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.207161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerStarted","Data":"bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.228296 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-02db-account-create-update-8h7xj" podStartSLOduration=3.228279288 podStartE2EDuration="3.228279288s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.226562421 +0000 UTC m=+1207.099596602" watchObservedRunningTime="2026-01-29 11:19:01.228279288 +0000 UTC m=+1207.101313479" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.249416 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-207d-account-create-update-n289g" podStartSLOduration=3.2493964 podStartE2EDuration="3.2493964s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.244177208 +0000 UTC m=+1207.117211389" watchObservedRunningTime="2026-01-29 11:19:01.2493964 +0000 UTC m=+1207.122430591" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.272448 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" podStartSLOduration=3.272422923 
podStartE2EDuration="3.272422923s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.265608078 +0000 UTC m=+1207.138642279" watchObservedRunningTime="2026-01-29 11:19:01.272422923 +0000 UTC m=+1207.145457124" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.292154 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-vpcpg" podStartSLOduration=3.292134647 podStartE2EDuration="3.292134647s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.283679258 +0000 UTC m=+1207.156713449" watchObservedRunningTime="2026-01-29 11:19:01.292134647 +0000 UTC m=+1207.165168838" Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.229020 4593 generic.go:334] "Generic (PLEG): container finished" podID="6b37d23e-84cc-4059-a109-18fec66cd168" containerID="97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.229358 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerDied","Data":"97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.233791 4593 generic.go:334] "Generic (PLEG): container finished" podID="d60bb61f-5204-4149-9922-70c6b0916c48" containerID="4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.233853 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerDied","Data":"4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.235807 4593 generic.go:334] "Generic (PLEG): container finished" podID="8c560b58-f036-4946-aca6-d59c9502954e" containerID="9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.235857 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerDied","Data":"9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.247475 4593 generic.go:334] "Generic (PLEG): container finished" podID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerID="690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.247542 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerDied","Data":"690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.249071 4593 generic.go:334] "Generic (PLEG): container finished" podID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerID="b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.249107 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerDied","Data":"b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.256413 4593 generic.go:334] "Generic (PLEG): container finished" podID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerID="6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.256466 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerDied","Data":"6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620"} Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.269421 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d" exitCode=0 Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.269520 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d"} Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.270011 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"d8c09b2b8b448508c118e29717b68e3a7cf488c8e6b3318a0fc967d165dd0e86"} Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.270033 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c09b2b8b448508c118e29717b68e3a7cf488c8e6b3318a0fc967d165dd0e86" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.299003 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.353745 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354079 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354162 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354262 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354429 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354574 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354784 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354910 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.355861 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.368413 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.386925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7" (OuterVolumeSpecName: "kube-api-access-nklq7") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "kube-api-access-nklq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.444831 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts" (OuterVolumeSpecName: "scripts") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.458245 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.459512 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.459668 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.459816 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.521380 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.568229 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.597775 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.658838 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.672282 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.672316 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.682419 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data" (OuterVolumeSpecName: "config-data") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.777061 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.947710 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.947791 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.024991 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.094195 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.094515 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.096845 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cc0715e-34d0-4d5e-a8cc-5809adc6e264" (UID: "3cc0715e-34d0-4d5e-a8cc-5809adc6e264"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.147998 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458" (OuterVolumeSpecName: "kube-api-access-5w458") pod "3cc0715e-34d0-4d5e-a8cc-5809adc6e264" (UID: "3cc0715e-34d0-4d5e-a8cc-5809adc6e264"). InnerVolumeSpecName "kube-api-access-5w458". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.202362 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.202403 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.243071 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.264252 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.283620 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.292939 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerDied","Data":"bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.292992 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.293075 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.294952 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerDied","Data":"ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.294999 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.295039 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303377 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"6b37d23e-84cc-4059-a109-18fec66cd168\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303608 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"6b37d23e-84cc-4059-a109-18fec66cd168\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303918 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b37d23e-84cc-4059-a109-18fec66cd168" (UID: "6b37d23e-84cc-4059-a109-18fec66cd168"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303978 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerDied","Data":"d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.304848 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.304070 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.305382 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.317084 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w" (OuterVolumeSpecName: "kube-api-access-4tq6w") pod "6b37d23e-84cc-4059-a109-18fec66cd168" (UID: "6b37d23e-84cc-4059-a109-18fec66cd168"). InnerVolumeSpecName "kube-api-access-4tq6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.319888 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.323904 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.324254 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerDied","Data":"dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.324339 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410496 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"5349ab78-1643-47e8-bfca-20d31e2f459f\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410721 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410777 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"5349ab78-1643-47e8-bfca-20d31e2f459f\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.412317 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.417766 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" (UID: "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.421352 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5349ab78-1643-47e8-bfca-20d31e2f459f" (UID: "5349ab78-1643-47e8-bfca-20d31e2f459f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.423349 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl" (OuterVolumeSpecName: "kube-api-access-cdqhl") pod "5349ab78-1643-47e8-bfca-20d31e2f459f" (UID: "5349ab78-1643-47e8-bfca-20d31e2f459f"). InnerVolumeSpecName "kube-api-access-cdqhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.440957 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h" (OuterVolumeSpecName: "kube-api-access-55t5h") pod "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" (UID: "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b"). InnerVolumeSpecName "kube-api-access-55t5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.503231 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.509260 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514051 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514301 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514380 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514482 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.540773 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.540933 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541357 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541374 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541393 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541400 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerName="mariadb-database-create" Jan 29 11:19:04 
crc kubenswrapper[4593]: E0129 11:19:04.541413 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541420 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541439 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541446 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541461 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541469 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541481 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c560b58-f036-4946-aca6-d59c9502954e" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541487 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c560b58-f036-4946-aca6-d59c9502954e" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541501 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541508 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541535 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541542 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541556 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541563 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541801 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541819 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541835 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" Jan 29 11:19:04 crc 
kubenswrapper[4593]: I0129 11:19:04.541848 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541858 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541873 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541882 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541896 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541905 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c560b58-f036-4946-aca6-d59c9502954e" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.548056 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.559743 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.561339 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.561616 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.561687 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.566609 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621439 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"8c560b58-f036-4946-aca6-d59c9502954e\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621589 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"d60bb61f-5204-4149-9922-70c6b0916c48\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621643 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"8c560b58-f036-4946-aca6-d59c9502954e\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621730 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"d60bb61f-5204-4149-9922-70c6b0916c48\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.622655 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623022 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c560b58-f036-4946-aca6-d59c9502954e" (UID: "8c560b58-f036-4946-aca6-d59c9502954e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623131 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623473 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623621 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623851 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623903 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.630363 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.630646 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d60bb61f-5204-4149-9922-70c6b0916c48" (UID: "d60bb61f-5204-4149-9922-70c6b0916c48"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.647484 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk" (OuterVolumeSpecName: "kube-api-access-8w8pk") pod "d60bb61f-5204-4149-9922-70c6b0916c48" (UID: "d60bb61f-5204-4149-9922-70c6b0916c48"). InnerVolumeSpecName "kube-api-access-8w8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.658016 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4" (OuterVolumeSpecName: "kube-api-access-p49g4") pod "8c560b58-f036-4946-aca6-d59c9502954e" (UID: "8c560b58-f036-4946-aca6-d59c9502954e"). InnerVolumeSpecName "kube-api-access-p49g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732115 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732197 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732322 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732436 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732463 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732546 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732560 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732574 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732593 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732904 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.737343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.739272 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.739309 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.740455 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.742492 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc 
kubenswrapper[4593]: I0129 11:19:04.751542 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.880080 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.097672 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dd6241-1218-4994-9fa1-75062ec38165" path="/var/lib/kubelet/pods/37dd6241-1218-4994-9fa1-75062ec38165/volumes" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.213324 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.331670 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerDied","Data":"44d5e9852fdbff2c2f57298b319bc2aac423abcdb37ecfe12370febe05fe491f"} Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.331728 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d5e9852fdbff2c2f57298b319bc2aac423abcdb37ecfe12370febe05fe491f" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.331806 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.334597 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerDied","Data":"b589e21f0266150b72b75e48575c70865e45ffe8e3a984bb6e0a7d1e0ce27721"} Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.334678 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b589e21f0266150b72b75e48575c70865e45ffe8e3a984bb6e0a7d1e0ce27721" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.334738 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.336470 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"fd0e610cbd8e4e7a281669c1ec869227753d76061275b3b46254e309d0addeb7"} Jan 29 11:19:06 crc kubenswrapper[4593]: I0129 11:19:06.348444 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3"} Jan 29 11:19:07 crc kubenswrapper[4593]: I0129 11:19:07.362093 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6"} Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.373667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1"} Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.826874 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:19:08 crc kubenswrapper[4593]: E0129 11:19:08.827498 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" containerName="mariadb-account-create-update" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.827574 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" containerName="mariadb-account-create-update" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.827844 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" containerName="mariadb-account-create-update" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.828545 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.830431 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.831874 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.832372 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dv5z9" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.852871 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915823 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915912 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915954 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915995 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017462 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017557 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017589 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: 
\"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017712 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.026016 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.026200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.026827 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.043766 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.146500 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.714029 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:19:10 crc kubenswrapper[4593]: I0129 11:19:10.422949 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerStarted","Data":"5657eeacbcf8694db60da42cd98750e99517877fa702ba31f32e45b7a57b37a1"} Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.109133 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:11 crc kubenswrapper[4593]: > Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.442231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28"} Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.442502 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.475893 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.454223861 podStartE2EDuration="7.475864831s" podCreationTimestamp="2026-01-29 11:19:04 +0000 UTC" firstStartedPulling="2026-01-29 11:19:05.230938383 +0000 UTC m=+1211.103972574" lastFinishedPulling="2026-01-29 11:19:10.252579353 +0000 UTC m=+1216.125613544" observedRunningTime="2026-01-29 11:19:11.469509149 +0000 UTC m=+1217.342543340" watchObservedRunningTime="2026-01-29 11:19:11.475864831 +0000 UTC m=+1217.348899022" Jan 29 11:19:21 crc kubenswrapper[4593]: I0129 11:19:21.106536 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:21 crc kubenswrapper[4593]: > Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.839344 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840747 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" containerID="cri-o://baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840795 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" containerID="cri-o://468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840822 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" 
containerName="ceilometer-notification-agent" containerID="cri-o://1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840805 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" containerID="cri-o://e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.865375 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.184:3000/\": EOF" Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581008 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerID="e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28" exitCode=0 Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581395 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerID="468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1" exitCode=2 Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28"} Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581444 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1"} Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.596795 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af" exitCode=137 Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.596927 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af"} Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.597794 4593 scope.go:117] "RemoveContainer" containerID="a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996" Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.602881 4593 generic.go:334] "Generic (PLEG): container finished" podID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerID="b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972" exitCode=137 Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.602956 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerDied","Data":"b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972"} Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.641787 4593 scope.go:117] "RemoveContainer" containerID="948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676065 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" 
containerID="1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6" exitCode=0 Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676103 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerID="baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3" exitCode=0 Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6"} Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676158 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3"} Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.808450 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918346 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918594 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918691 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918724 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: 
\"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918785 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.929557 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.929927 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.934869 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd" (OuterVolumeSpecName: "kube-api-access-t8pfd") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "kube-api-access-t8pfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.947006 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts" (OuterVolumeSpecName: "scripts") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.992703 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.021994 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022028 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022040 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022055 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022065 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.033789 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.034291 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.098696 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data" (OuterVolumeSpecName: "config-data") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.123450 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.123484 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.123496 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.704825 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"fd0e610cbd8e4e7a281669c1ec869227753d76061275b3b46254e309d0addeb7"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.705213 4593 scope.go:117] "RemoveContainer" containerID="e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.705378 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.719786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerStarted","Data":"81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.730385 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.748044 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"adc17d8c83f12504baffeb49cb0d2af04cf61eab5f1267756b9ff12b2edb5285"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.754272 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vkj44" podStartSLOduration=2.704186543 podStartE2EDuration="20.754252774s" podCreationTimestamp="2026-01-29 11:19:08 +0000 UTC" firstStartedPulling="2026-01-29 11:19:09.726827045 +0000 UTC m=+1215.599861236" lastFinishedPulling="2026-01-29 11:19:27.776893276 +0000 UTC m=+1233.649927467" observedRunningTime="2026-01-29 11:19:28.741370135 +0000 UTC m=+1234.614404336" watchObservedRunningTime="2026-01-29 11:19:28.754252774 +0000 UTC m=+1234.627286965" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.782237 4593 scope.go:117] "RemoveContainer" containerID="468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.824690 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.831784 4593 scope.go:117] "RemoveContainer" 
containerID="1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.850378 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.876946 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65b9b146_d0fa_4da2_8d0a_a6896f57895b.slice/crio-fd0e610cbd8e4e7a281669c1ec869227753d76061275b3b46254e309d0addeb7\": RecentStats: unable to find data in memory cache]" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.878445 4593 scope.go:117] "RemoveContainer" containerID="baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.886409 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.886978 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-notification-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887004 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-notification-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.887027 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887035 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.887053 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887061 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.887103 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887112 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887619 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887674 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-notification-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887689 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887700 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.890507 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.894884 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.895258 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.898016 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.926279 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040751 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040876 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040987 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.041040 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.041094 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkf8\" (UniqueName: 
\"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.086555 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" path="/var/lib/kubelet/pods/65b9b146-d0fa-4da2-8d0a-a6896f57895b/volumes" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142442 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142761 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142853 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142893 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142924 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142971 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.143021 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.144339 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"ceilometer-0\" (UID: 
\"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.144449 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.152022 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.152096 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.166357 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.167977 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.173746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.178522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.221990 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: W0129 11:19:29.727718 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf1f4c00_33e4_4464_8ce0_c188cd6c2098.slice/crio-b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce WatchSource:0}: Error finding container b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce: Status 404 returned error can't find the container with id b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.733828 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.759920 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce"} Jan 29 11:19:30 crc kubenswrapper[4593]: I0129 11:19:30.776095 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"} Jan 29 11:19:31 crc kubenswrapper[4593]: I0129 11:19:31.102885 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:31 crc kubenswrapper[4593]: > Jan 29 11:19:31 crc kubenswrapper[4593]: I0129 11:19:31.789781 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"} Jan 29 11:19:33 crc kubenswrapper[4593]: I0129 11:19:33.808309 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"} Jan 29 11:19:33 crc kubenswrapper[4593]: I0129 11:19:33.946771 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:19:33 crc kubenswrapper[4593]: I0129 11:19:33.946825 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:19:34 crc kubenswrapper[4593]: I0129 11:19:34.909783 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:19:34 crc kubenswrapper[4593]: I0129 11:19:34.910120 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:19:35 crc kubenswrapper[4593]: I0129 11:19:35.049754 4593 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:19:35 crc kubenswrapper[4593]: I0129 11:19:35.049818 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.712395 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.850414 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"} Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.850675 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.881773 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.656291713 podStartE2EDuration="8.881750574s" podCreationTimestamp="2026-01-29 11:19:28 +0000 UTC" firstStartedPulling="2026-01-29 11:19:29.729539835 +0000 UTC m=+1235.602574026" lastFinishedPulling="2026-01-29 11:19:35.954998686 +0000 UTC m=+1241.828032887" observedRunningTime="2026-01-29 11:19:36.878935227 +0000 UTC m=+1242.751969418" watchObservedRunningTime="2026-01-29 11:19:36.881750574 +0000 UTC m=+1242.754784765" Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859227 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" containerID="cri-o://1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" gracePeriod=30 Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859414 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" containerID="cri-o://884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" gracePeriod=30 Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859441 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" containerID="cri-o://50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" gracePeriod=30 Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859514 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" containerID="cri-o://c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" gracePeriod=30 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.876710 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" exitCode=0 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877116 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" exitCode=2 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877129 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" 
containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" exitCode=0 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.876919 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"} Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877165 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"} Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877189 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"} Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.102008 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:41 crc kubenswrapper[4593]: > Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.102551 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.103313 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} pod="openshift-marketplace/redhat-operators-k4l8n" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.103344 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" containerID="cri-o://01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" gracePeriod=30 Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.668312 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790717 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790820 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790943 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790969 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791027 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791068 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791306 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791510 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.792111 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.792128 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.809600 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8" (OuterVolumeSpecName: "kube-api-access-ppkf8") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "kube-api-access-ppkf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.812285 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts" (OuterVolumeSpecName: "scripts") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.827906 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.877976 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894574 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894603 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894616 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894624 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.896485 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.909315 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data" (OuterVolumeSpecName: "config-data") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921854 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" exitCode=0 Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921904 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"} Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921949 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921975 4593 scope.go:117] "RemoveContainer" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921958 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce"} Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.955843 4593 scope.go:117] "RemoveContainer" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.979233 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.990893 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.996316 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.996353 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.000791 4593 scope.go:117] "RemoveContainer" containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004235 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004568 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004586 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004604 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004611 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004733 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004742 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004765 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004771 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004947 4593 
memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004958 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004969 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004994 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.006573 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.012288 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.012570 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.012584 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.024707 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.066264 4593 scope.go:117] "RemoveContainer" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.092826 4593 scope.go:117] "RemoveContainer" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.094001 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b\": container with ID starting with c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b not found: ID does not exist" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094036 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"} err="failed to get container status \"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b\": rpc error: code = NotFound desc = could not find container \"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b\": container with ID starting with c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b not found: ID does not exist" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094064 4593 scope.go:117] "RemoveContainer" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.094390 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4\": container with ID starting with 884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4 not found: ID does not exist" 
containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094410 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"} err="failed to get container status \"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4\": rpc error: code = NotFound desc = could not find container \"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4\": container with ID starting with 884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4 not found: ID does not exist" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094423 4593 scope.go:117] "RemoveContainer" containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.094836 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98\": container with ID starting with 50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98 not found: ID does not exist" containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094859 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"} err="failed to get container status \"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98\": rpc error: code = NotFound desc = could not find container \"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98\": container with ID starting with 50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98 not found: ID does not exist" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094874 4593 scope.go:117] "RemoveContainer" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.095269 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83\": container with ID starting with 1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83 not found: ID does not exist" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.095288 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"} err="failed to get container status \"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83\": rpc error: code = NotFound desc = could not find container \"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83\": container with ID starting with 1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83 not found: ID does not exist" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.200990 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 
crc kubenswrapper[4593]: I0129 11:19:44.201065 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201174 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201214 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201290 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201362 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303383 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303795 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303856 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303893 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303928 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304020 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304062 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304102 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304771 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.308968 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.310758 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.311008 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.313268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " 
pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.388179 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.439763 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.441507 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.646898 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.912453 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:19:45 crc kubenswrapper[4593]: I0129 11:19:45.050748 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:19:45 crc kubenswrapper[4593]: I0129 11:19:45.092383 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" path="/var/lib/kubelet/pods/df1f4c00-33e4-4464-8ce0-c188cd6c2098/volumes" Jan 29 11:19:45 crc kubenswrapper[4593]: I0129 11:19:45.189106 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:45 crc kubenswrapper[4593]: W0129 11:19:45.193564 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934ccdca_f1e6_43d2_af69_2efb205bf387.slice/crio-4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763 WatchSource:0}: Error finding container 4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763: Status 404 returned error can't find the container with id 4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763 Jan 29 11:19:46 crc kubenswrapper[4593]: I0129 11:19:46.481522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763"} Jan 29 11:19:47 crc kubenswrapper[4593]: I0129 11:19:47.500900 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050"} Jan 29 11:19:48 crc kubenswrapper[4593]: I0129 11:19:48.529064 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b"} Jan 29 11:19:48 crc kubenswrapper[4593]: I0129 11:19:48.529434 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f"} Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.571286 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9"} Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.573118 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.586267 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k4l8n_9194cbfb-27b9-47e8-90eb-64b9391d0b07/registry-server/0.log" Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.595581 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" exitCode=0 Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.595996 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.596130 4593 scope.go:117] "RemoveContainer" containerID="392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9" Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.607296 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.060269705 podStartE2EDuration="8.607277133s" podCreationTimestamp="2026-01-29 11:19:43 +0000 UTC" firstStartedPulling="2026-01-29 11:19:45.196239467 +0000 UTC m=+1251.069273658" lastFinishedPulling="2026-01-29 11:19:50.743246895 +0000 UTC m=+1256.616281086" observedRunningTime="2026-01-29 11:19:51.592338468 +0000 UTC m=+1257.465372659" watchObservedRunningTime="2026-01-29 11:19:51.607277133 +0000 UTC m=+1257.480311324" Jan 29 11:19:52 crc kubenswrapper[4593]: I0129 11:19:52.615716 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2"} Jan 29 11:19:54 crc kubenswrapper[4593]: I0129 11:19:54.910296 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:19:55 crc 
kubenswrapper[4593]: I0129 11:19:55.050043 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:19:56 crc kubenswrapper[4593]: I0129 11:19:56.668007 4593 generic.go:334] "Generic (PLEG): container finished" podID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerID="81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b" exitCode=0 Jan 29 11:19:56 crc kubenswrapper[4593]: I0129 11:19:56.669139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerDied","Data":"81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b"} Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.092905 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098211 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098261 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098326 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098452 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.107878 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts" (OuterVolumeSpecName: "scripts") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.117890 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7" (OuterVolumeSpecName: "kube-api-access-dxmg7") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "kube-api-access-dxmg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.164669 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data" (OuterVolumeSpecName: "config-data") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.197570 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201288 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201324 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201340 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201352 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.710364 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerDied","Data":"5657eeacbcf8694db60da42cd98750e99517877fa702ba31f32e45b7a57b37a1"} Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.710791 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5657eeacbcf8694db60da42cd98750e99517877fa702ba31f32e45b7a57b37a1" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.710588 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.985798 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:19:58 crc kubenswrapper[4593]: E0129 11:19:58.986319 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerName="nova-cell0-conductor-db-sync" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.986344 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerName="nova-cell0-conductor-db-sync" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.986581 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerName="nova-cell0-conductor-db-sync" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.987431 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.990188 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dv5z9" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.990394 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.003458 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.016469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.016558 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkr8l\" (UniqueName: \"kubernetes.io/projected/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-kube-api-access-wkr8l\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.016796 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.120775 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.120838 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkr8l\" (UniqueName: \"kubernetes.io/projected/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-kube-api-access-wkr8l\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: 
I0129 11:19:59.120909 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.154497 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkr8l\" (UniqueName: \"kubernetes.io/projected/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-kube-api-access-wkr8l\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.154578 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.155393 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.329957 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.858397 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:19:59 crc kubenswrapper[4593]: W0129 11:19:59.859622 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50238c6_e2ee_4e0b_a9c9_ded7ee100c6f.slice/crio-25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e WatchSource:0}: Error finding container 25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e: Status 404 returned error can't find the container with id 25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.053366 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.053603 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.867785 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f","Type":"ContainerStarted","Data":"6f005e0f24fa46ef5dd9f95d49e1d95dfec214ed45107732d9cd041a3d060478"} Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.868971 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.869113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f","Type":"ContainerStarted","Data":"25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e"} Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 
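The nova-cell0-conductor-0 entries above trace one cold start end to end: SyncLoop ADD at 11:19:58.985798, secret caches populated, all three volumes attached and mounted by 11:19:59.155393, sandbox creation kicked off at 11:19:59.329957 ("No sandbox for pod can be found"), and the first ContainerStarted PLEG event surfacing at 11:20:00.867785. A small sketch of pulling those phase latencies out of the timestamps; the values are copied from the log and the phase labels are informal, not kubelet terminology:

```python
# Timestamps as seconds after 11:19:00 UTC, copied from the nova-cell0-conductor-0 lines above.
events = [
    ("SyncLoop ADD",              58.985798),
    ("volumes mounted",           59.155393),  # last MountVolume.SetUp succeeded
    ("sandbox creation started",  59.329957),  # "No sandbox for pod can be found"
    ("ContainerStarted observed", 60.867785),  # first PLEG ContainerStarted event
]

for (prev_label, prev_t), (label, t) in zip(events, events[1:]):
    print(f"{prev_label} -> {label}: {t - prev_t:.3f}s")
```

On this run the mount and sandbox steps each complete in well under a second; the bulk of the time (about 1.5 s) sits between sandbox creation and the first observed container, part of which is likely PLEG observation lag rather than container start time itself.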
11:20:00.893333 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.893310694 podStartE2EDuration="2.893310694s" podCreationTimestamp="2026-01-29 11:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:00.886622003 +0000 UTC m=+1266.759656194" watchObservedRunningTime="2026-01-29 11:20:00.893310694 +0000 UTC m=+1266.766344885" Jan 29 11:20:01 crc kubenswrapper[4593]: I0129 11:20:01.117617 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:01 crc kubenswrapper[4593]: > Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.947088 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.947883 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.947976 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.949228 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.949358 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002" gracePeriod=600 Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910162 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002" exitCode=0 Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910257 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002"} Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910897 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" 
event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"} Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910991 4593 scope.go:117] "RemoveContainer" containerID="8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d" Jan 29 11:20:09 crc kubenswrapper[4593]: I0129 11:20:09.359712 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 11:20:09 crc kubenswrapper[4593]: I0129 11:20:09.459484 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:20:09 crc kubenswrapper[4593]: I0129 11:20:09.513840 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.273596 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.275364 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.277829 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.277928 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.292234 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.448053 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.448437 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.448469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.449253 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.501152 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 
11:20:10.502891 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.512336 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.530880 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553296 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553348 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553388 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553405 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553438 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553470 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553512 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.554722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.562422 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.573538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.574932 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.615356 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.634096 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.635444 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.639965 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.656859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.656935 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.656959 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657006 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657064 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657088 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657114 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.660990 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.681413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.683503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.683952 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.740278 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.808939 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.809159 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.809254 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"nova-scheduler-0\" (UID: 
\"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.828597 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.830738 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.836431 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.898372 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.899779 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.981135 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.986317 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.044990 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.045043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.045108 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.045216 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.049529 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:20:11 crc 
kubenswrapper[4593]: I0129 11:20:11.055744 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.125871 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.134685 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.135167 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.146454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.150612 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.181072 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.181961 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.170143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.151149 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:11 crc kubenswrapper[4593]: > Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.182422 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.182608 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.174591 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.213127 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.214786 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.214974 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.232495 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.278272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288336 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288402 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288530 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288560 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288877 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288944 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.289023 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.289048 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.368840 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.411120 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426137 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426320 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: 
I0129 11:20:11.427004 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427102 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427126 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.428476 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.429474 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.430066 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.430762 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.431328 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.440292 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppsw\" 
(UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.442361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.451330 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.486659 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.576815 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.010179 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.089819 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:12 crc kubenswrapper[4593]: W0129 11:20:12.150745 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd09a34f_e8e0_45ab_8106_550772be304d.slice/crio-d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9 WatchSource:0}: Error finding container d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9: Status 404 returned error can't find the container with id d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9 Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.305274 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.607299 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.686508 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:12 crc kubenswrapper[4593]: W0129 11:20:12.747212 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ea9a9cf_fb59_4fec_a11c_3a228320cf32.slice/crio-13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587 WatchSource:0}: Error finding container 13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587: Status 404 returned error can't find the container with id 13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587 Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.780978 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.906953 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.068023 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.069404 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.077365 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.077536 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.098524 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.135068 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerStarted","Data":"13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.137043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerStarted","Data":"7eb448007e7f2f259e7551ed6226b778b13ff57e3f9a0c2ec212e1fb5e5be79a"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.138311 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerStarted","Data":"afca7bf4b299e69d695725ee22c529f3ea659c864ce859245236b6ced858cb90"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.139280 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerStarted","Data":"d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.140179 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerStarted","Data":"7a4e7135bde371deba18f2e2d879e899cf14dcee993b634bcfe74d5b004e721e"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.141806 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerStarted","Data":"96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.141850 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerStarted","Data":"40b85745aaf0431c0c3b188b6e870f9ab2cee2968144160c13e9e9930341c6fc"} Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.158877 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.158946 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.159168 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.159668 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.167428 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jfk6z" podStartSLOduration=3.1674012400000002 podStartE2EDuration="3.16740124s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:13.159695121 +0000 UTC m=+1279.032729322" watchObservedRunningTime="2026-01-29 11:20:13.16740124 +0000 UTC m=+1279.040435431" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264400 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264611 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264745 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264827 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " 
pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.269962 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.270592 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.274191 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.288344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.392348 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.200858 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.232058 4593 generic.go:334] "Generic (PLEG): container finished" podID="697e4dbe-9b00-4891-9456-f76cb9642401" containerID="7393be6f52eedddb8f2e44100a437ddd9c4a6aceb5605fe268b7dc5e484c61b6" exitCode=0 Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.233880 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerDied","Data":"7393be6f52eedddb8f2e44100a437ddd9c4a6aceb5605fe268b7dc5e484c61b6"} Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.247285 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.448475 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.449654 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" containerID="cri-o://79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8" gracePeriod=30 Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.450047 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" 
containerID="cri-o://3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" gracePeriod=30 Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.479803 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.693011 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.289887 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerStarted","Data":"becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a"} Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.290233 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerStarted","Data":"8dc46203d3c6c5d1cde15f072717e4362e4df9ca33b0077c8bfb3bc44346b805"} Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.379016 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerStarted","Data":"5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb"} Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.379930 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" podStartSLOduration=2.379911927 podStartE2EDuration="2.379911927s" podCreationTimestamp="2026-01-29 11:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:15.314156396 +0000 UTC m=+1281.187190597" watchObservedRunningTime="2026-01-29 11:20:15.379911927 +0000 UTC m=+1281.252946118" Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.380061 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.410965 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" podStartSLOduration=4.410933827 podStartE2EDuration="4.410933827s" podCreationTimestamp="2026-01-29 11:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:15.406409604 +0000 UTC m=+1281.279443795" watchObservedRunningTime="2026-01-29 11:20:15.410933827 +0000 UTC m=+1281.283968028" Jan 29 11:20:16 crc kubenswrapper[4593]: I0129 11:20:16.235777 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:16 crc kubenswrapper[4593]: I0129 11:20:16.255921 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:19 crc kubenswrapper[4593]: I0129 11:20:19.445194 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:43798->10.217.0.146:8443: read: connection reset by peer" Jan 29 11:20:19 crc kubenswrapper[4593]: I0129 11:20:19.446497 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:20 crc kubenswrapper[4593]: I0129 11:20:20.433358 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" exitCode=0 Jan 29 11:20:20 crc kubenswrapper[4593]: I0129 11:20:20.433442 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909"} Jan 29 11:20:20 crc kubenswrapper[4593]: I0129 11:20:20.433996 4593 scope.go:117] "RemoveContainer" containerID="d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.151560 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:21 crc kubenswrapper[4593]: > Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.444354 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerStarted","Data":"660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.451733 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerStarted","Data":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.451789 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerStarted","Data":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.451893 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" containerID="cri-o://958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" gracePeriod=30 Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.452004 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" containerID="cri-o://453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" gracePeriod=30 Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.458272 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" gracePeriod=30 Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.458386 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerStarted","Data":"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.463572 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerStarted","Data":"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.463649 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerStarted","Data":"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.473172 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.09940852 podStartE2EDuration="11.473150526s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.63858893 +0000 UTC m=+1278.511623121" lastFinishedPulling="2026-01-29 11:20:20.012330936 +0000 UTC m=+1285.885365127" observedRunningTime="2026-01-29 11:20:21.472606762 +0000 UTC m=+1287.345640953" watchObservedRunningTime="2026-01-29 11:20:21.473150526 +0000 UTC m=+1287.346184717" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.512216 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.620172 podStartE2EDuration="11.512191153s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.171418568 +0000 UTC m=+1278.044452759" lastFinishedPulling="2026-01-29 11:20:20.063437721 +0000 UTC m=+1285.936471912" observedRunningTime="2026-01-29 11:20:21.493352114 +0000 UTC m=+1287.366386305" watchObservedRunningTime="2026-01-29 11:20:21.512191153 +0000 UTC m=+1287.385225344" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.524273 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.360664165 podStartE2EDuration="11.5242501s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.930212577 +0000 UTC m=+1278.803246768" lastFinishedPulling="2026-01-29 11:20:20.093798512 +0000 UTC m=+1285.966832703" observedRunningTime="2026-01-29 11:20:21.52127541 +0000 UTC m=+1287.394309621" watchObservedRunningTime="2026-01-29 11:20:21.5242501 +0000 UTC m=+1287.397284291" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.548283 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.243103851 podStartE2EDuration="11.54825071s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.754818827 +0000 UTC m=+1278.627853018" lastFinishedPulling="2026-01-29 11:20:20.059965686 +0000 UTC m=+1285.932999877" observedRunningTime="2026-01-29 11:20:21.53679485 +0000 UTC m=+1287.409829041" watchObservedRunningTime="2026-01-29 11:20:21.54825071 +0000 UTC m=+1287.421284911" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 
11:20:21.578826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.689530 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.690217 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" containerID="cri-o://71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059" gracePeriod=10 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.012317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.446086 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.506395 4593 generic.go:334] "Generic (PLEG): container finished" podID="7aadd015-f714-41cf-b532-396d9f5f3946" containerID="71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059" exitCode=0 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.506761 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerDied","Data":"71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.508883 4593 generic.go:334] "Generic (PLEG): container finished" podID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" exitCode=0 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.508929 4593 generic.go:334] "Generic (PLEG): container finished" podID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" exitCode=143 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.510538 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511614 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerDied","Data":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerDied","Data":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511706 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerDied","Data":"13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511726 4593 scope.go:117] "RemoveContainer" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.567159 4593 scope.go:117] "RemoveContainer" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631576 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631619 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631649 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631693 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.636207 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs" (OuterVolumeSpecName: "logs") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.653354 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8" (OuterVolumeSpecName: "kube-api-access-78sj8") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "kube-api-access-78sj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.696777 4593 scope.go:117] "RemoveContainer" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: E0129 11:20:22.698897 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": container with ID starting with 453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab not found: ID does not exist" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.698929 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} err="failed to get container status \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": rpc error: code = NotFound desc = could not find container \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": container with ID starting with 453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.698951 4593 scope.go:117] "RemoveContainer" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: E0129 11:20:22.703103 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": container with ID starting with 958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325 not found: ID does not exist" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.703145 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} err="failed to get container status \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": rpc error: code = NotFound desc = could not find container \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": container with ID starting with 958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325 not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.703171 4593 scope.go:117] "RemoveContainer" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.707814 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} err="failed to get container status \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": rpc error: code = NotFound desc = could not find container \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": container with ID starting with 453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.707857 4593 scope.go:117] "RemoveContainer" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.708153 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} err="failed to get container status \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": rpc error: code = NotFound desc = could not find container \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": container with ID starting with 958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325 not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.713742 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.717082 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.725833 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data" (OuterVolumeSpecName: "config-data") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737484 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737644 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737757 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737804 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtth\" (UniqueName: 
\"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738283 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738308 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738320 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738331 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.755752 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth" (OuterVolumeSpecName: "kube-api-access-xbtth") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "kube-api-access-xbtth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.841340 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.852255 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.878365 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config" (OuterVolumeSpecName: "config") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.887738 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.908294 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.909183 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942419 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942449 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942462 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942471 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942480 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.973215 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.996802 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.063731 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:23 crc kubenswrapper[4593]: E0129 11:20:23.064215 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064238 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" Jan 29 11:20:23 crc kubenswrapper[4593]: E0129 11:20:23.064254 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064261 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" Jan 29 11:20:23 
crc kubenswrapper[4593]: E0129 11:20:23.064295 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064302 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" Jan 29 11:20:23 crc kubenswrapper[4593]: E0129 11:20:23.064319 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="init" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064324 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="init" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064489 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064505 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064512 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.065600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.066884 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.069334 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.070056 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.103143 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" path="/var/lib/kubelet/pods/8ea9a9cf-fb59-4fec-a11c-3a228320cf32/volumes" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.249316 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.249801 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.251103 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.251345 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.251552 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.353239 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.353871 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.354049 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.354182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.354432 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.355139 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.364478 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.366443 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 
crc kubenswrapper[4593]: I0129 11:20:23.381047 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.389618 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.400330 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.543850 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerDied","Data":"f371f618c4302fbf0bf3244208980a3b33a4e263434fd709be03f076a3036627"} Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.543944 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.544256 4593 scope.go:117] "RemoveContainer" containerID="71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.598673 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.612065 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.702916 4593 scope.go:117] "RemoveContainer" containerID="d7d10b40887ad7cb3695100bfd7e2e09a54897e25591da02ac46e6c0d27cc415" Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.150414 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.586231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerStarted","Data":"35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d"} Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.586562 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerStarted","Data":"4cefe4364c2588402ec5dd748f4b5e3fc4e65f94d005770bf05acdcf92ebff76"} Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.911486 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:25 crc kubenswrapper[4593]: I0129 11:20:25.090511 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" path="/var/lib/kubelet/pods/7aadd015-f714-41cf-b532-396d9f5f3946/volumes" Jan 29 11:20:25 crc kubenswrapper[4593]: I0129 11:20:25.600662 4593 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerStarted","Data":"2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164"} Jan 29 11:20:25 crc kubenswrapper[4593]: I0129 11:20:25.628461 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.628435835 podStartE2EDuration="3.628435835s" podCreationTimestamp="2026-01-29 11:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:25.624054177 +0000 UTC m=+1291.497088368" watchObservedRunningTime="2026-01-29 11:20:25.628435835 +0000 UTC m=+1291.501470036" Jan 29 11:20:26 crc kubenswrapper[4593]: I0129 11:20:26.175031 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.400987 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.404791 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.643776 4593 generic.go:334] "Generic (PLEG): container finished" podID="ecc4cd76-a47d-4691-906f-d1617455f100" containerID="96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7" exitCode=0 Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.643861 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerDied","Data":"96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7"} Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.201452 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401565 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401736 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401820 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.421117 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4" (OuterVolumeSpecName: "kube-api-access-7rlg4") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "kube-api-access-7rlg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.433375 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts" (OuterVolumeSpecName: "scripts") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.451009 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data" (OuterVolumeSpecName: "config-data") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.451782 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507660 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507701 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507714 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507729 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.670305 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerDied","Data":"40b85745aaf0431c0c3b188b6e870f9ab2cee2968144160c13e9e9930341c6fc"} Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.670696 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b85745aaf0431c0c3b188b6e870f9ab2cee2968144160c13e9e9930341c6fc" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.670382 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.831343 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.831405 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.863623 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.863842 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" containerID="cri-o://660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c" gracePeriod=30 Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.874569 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.913425 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.913722 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" containerID="cri-o://35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d" gracePeriod=30 Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.913889 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" 
containerName="nova-metadata-metadata" containerID="cri-o://2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164" gracePeriod=30 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.111572 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:31 crc kubenswrapper[4593]: > Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686331 4593 generic.go:334] "Generic (PLEG): container finished" podID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerID="2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164" exitCode=0 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686374 4593 generic.go:334] "Generic (PLEG): container finished" podID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerID="35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d" exitCode=143 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686488 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerDied","Data":"2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164"} Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686555 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerDied","Data":"35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d"} Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686617 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" containerID="cri-o://cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" gracePeriod=30 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.687120 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" containerID="cri-o://c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" gracePeriod=30 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.693252 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.693422 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.108384 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.235548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.236903 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.237104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.237425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.237523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.238799 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs" (OuterVolumeSpecName: "logs") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.250849 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb" (OuterVolumeSpecName: "kube-api-access-v8jgb") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "kube-api-access-v8jgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.275558 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.328078 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data" (OuterVolumeSpecName: "config-data") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.334903 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340487 4593 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340514 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340525 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340538 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340548 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.705846 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd09a34f-e8e0-45ab-8106-550772be304d" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" exitCode=143 Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.705913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerDied","Data":"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594"} Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.708395 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerDied","Data":"4cefe4364c2588402ec5dd748f4b5e3fc4e65f94d005770bf05acdcf92ebff76"} Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.708428 4593 scope.go:117] "RemoveContainer" containerID="2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.708455 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.749423 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.752646 4593 scope.go:117] "RemoveContainer" containerID="35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.762399 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.775418 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: E0129 11:20:32.776079 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" containerName="nova-manage" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.776162 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" containerName="nova-manage" Jan 29 11:20:32 crc kubenswrapper[4593]: E0129 11:20:32.776259 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-metadata" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.776339 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-metadata" Jan 29 11:20:32 crc kubenswrapper[4593]: E0129 11:20:32.776420 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.776479 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.777402 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.777591 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" containerName="nova-manage" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.778163 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-metadata" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.779530 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.787104 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.787151 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.790793 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.855354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.855651 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.856153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.856298 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.856383 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.957804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.957887 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.957963 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 
11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.958035 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.958108 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.959942 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.965429 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.974669 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.975356 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.975494 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.086243 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" path="/var/lib/kubelet/pods/78c17a08-712a-47fb-a1eb-f26be532ce98/volumes" Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.108425 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.645418 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:33 crc kubenswrapper[4593]: W0129 11:20:33.658235 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa00230_26f8_4fa7_b32c_994ec82a6ac4.slice/crio-185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a WatchSource:0}: Error finding container 185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a: Status 404 returned error can't find the container with id 185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.725491 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerStarted","Data":"185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a"} Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.743911 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerStarted","Data":"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df"} Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.745391 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerStarted","Data":"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c"} Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.771870 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.771844915 podStartE2EDuration="2.771844915s" podCreationTimestamp="2026-01-29 11:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:34.763046417 +0000 UTC m=+1300.636080608" watchObservedRunningTime="2026-01-29 11:20:34.771844915 +0000 UTC m=+1300.644879106" Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.911829 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760416 4593 generic.go:334] "Generic (PLEG): container finished" podID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerID="660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c" exitCode=0 Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760757 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerDied","Data":"660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c"} Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760921 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerDied","Data":"7a4e7135bde371deba18f2e2d879e899cf14dcee993b634bcfe74d5b004e721e"} Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760960 4593 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4e7135bde371deba18f2e2d879e899cf14dcee993b634bcfe74d5b004e721e" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.782743 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.886033 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"54be0c9a-2dea-467c-afa6-230000d9ccfa\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.886284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"54be0c9a-2dea-467c-afa6-230000d9ccfa\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.886405 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"54be0c9a-2dea-467c-afa6-230000d9ccfa\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.894970 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr" (OuterVolumeSpecName: "kube-api-access-7btbr") pod "54be0c9a-2dea-467c-afa6-230000d9ccfa" (UID: "54be0c9a-2dea-467c-afa6-230000d9ccfa"). InnerVolumeSpecName "kube-api-access-7btbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.916579 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54be0c9a-2dea-467c-afa6-230000d9ccfa" (UID: "54be0c9a-2dea-467c-afa6-230000d9ccfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.924089 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data" (OuterVolumeSpecName: "config-data") pod "54be0c9a-2dea-467c-afa6-230000d9ccfa" (UID: "54be0c9a-2dea-467c-afa6-230000d9ccfa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.989573 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.991513 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.991554 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.767558 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.802541 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.818368 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.871954 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:36 crc kubenswrapper[4593]: E0129 11:20:36.872595 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.872621 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.872884 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.873862 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.879742 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.887364 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.012830 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.013161 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.013322 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.086031 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" path="/var/lib/kubelet/pods/54be0c9a-2dea-467c-afa6-230000d9ccfa/volumes" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.115160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.115356 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.115406 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.119599 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.120414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"nova-scheduler-0\" (UID: 
\"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.136518 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.205107 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.781206 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.109541 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.109619 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.781906 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813401 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd09a34f-e8e0-45ab-8106-550772be304d" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" exitCode=0 Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerDied","Data":"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813561 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerDied","Data":"d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813582 4593 scope.go:117] "RemoveContainer" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813851 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.823596 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerStarted","Data":"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.823665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerStarted","Data":"c94ac2729f1f8331d111e95fa7df8974b6fcb7da88f692f7369227d26b750286"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.849359 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.849276809 podStartE2EDuration="2.849276809s" podCreationTimestamp="2026-01-29 11:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:38.845130997 +0000 UTC m=+1304.718165188" watchObservedRunningTime="2026-01-29 11:20:38.849276809 +0000 UTC m=+1304.722311010" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.870071 4593 scope.go:117] "RemoveContainer" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.899868 4593 scope.go:117] "RemoveContainer" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" Jan 29 11:20:38 crc kubenswrapper[4593]: E0129 11:20:38.900236 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e\": container with ID starting with c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e not found: ID does not exist" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.900275 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e"} err="failed to get container status \"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e\": rpc error: code = NotFound desc = could not find container \"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e\": container with ID starting with c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e not found: ID does not exist" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.900302 4593 scope.go:117] "RemoveContainer" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" Jan 29 11:20:38 crc kubenswrapper[4593]: E0129 11:20:38.900587 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594\": container with ID starting with cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594 not found: ID does not exist" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.900618 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594"} err="failed to get container status 
\"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594\": rpc error: code = NotFound desc = could not find container \"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594\": container with ID starting with cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594 not found: ID does not exist" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953570 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953734 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953793 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953828 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.955331 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs" (OuterVolumeSpecName: "logs") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.955771 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.983065 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj" (OuterVolumeSpecName: "kube-api-access-9crfj") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "kube-api-access-9crfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.994298 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.994399 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data" (OuterVolumeSpecName: "config-data") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.058147 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.058184 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.058194 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.137821 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.149665 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.186207 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: E0129 11:20:39.187018 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187158 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" Jan 29 11:20:39 crc kubenswrapper[4593]: E0129 11:20:39.187259 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187331 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187676 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187816 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.189362 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.192257 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.207911 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429290 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429441 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429619 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.530654 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531327 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531495 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " 
pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.536550 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.560889 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.564242 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.807236 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.297666 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.853136 4593 generic.go:334] "Generic (PLEG): container finished" podID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerID="becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a" exitCode=0 Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.853261 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerDied","Data":"becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a"} Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.855490 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerStarted","Data":"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db"} Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.855531 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerStarted","Data":"f7db3d2de4fdf878656547d9c3589d171005e852c5677ab4b1055551daeb9535"} Jan 29 11:20:41 crc kubenswrapper[4593]: I0129 11:20:41.086496 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" path="/var/lib/kubelet/pods/fd09a34f-e8e0-45ab-8106-550772be304d/volumes" Jan 29 11:20:41 crc kubenswrapper[4593]: I0129 11:20:41.105477 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:41 crc kubenswrapper[4593]: > Jan 29 11:20:41 crc kubenswrapper[4593]: I0129 11:20:41.871790 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerStarted","Data":"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5"} Jan 29 11:20:41 crc 
kubenswrapper[4593]: I0129 11:20:41.901751 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.901716146 podStartE2EDuration="2.901716146s" podCreationTimestamp="2026-01-29 11:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:41.896885514 +0000 UTC m=+1307.769919715" watchObservedRunningTime="2026-01-29 11:20:41.901716146 +0000 UTC m=+1307.774750337" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.207327 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.262362 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357119 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357180 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357291 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357406 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.363615 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz" (OuterVolumeSpecName: "kube-api-access-cm2nz") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). InnerVolumeSpecName "kube-api-access-cm2nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.367842 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts" (OuterVolumeSpecName: "scripts") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.389547 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.404433 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data" (OuterVolumeSpecName: "config-data") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462846 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462896 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462910 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462921 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.881128 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerDied","Data":"8dc46203d3c6c5d1cde15f072717e4362e4df9ca33b0077c8bfb3bc44346b805"} Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.881216 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dc46203d3c6c5d1cde15f072717e4362e4df9ca33b0077c8bfb3bc44346b805" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.881158 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.984613 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:20:42 crc kubenswrapper[4593]: E0129 11:20:42.985046 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerName="nova-cell1-conductor-db-sync" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.985065 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerName="nova-cell1-conductor-db-sync" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.985297 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerName="nova-cell1-conductor-db-sync" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.986081 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.989906 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.006512 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.074494 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.074703 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgjk\" (UniqueName: \"kubernetes.io/projected/bee10dce-c68f-47f4-84e0-623f276964d8-kube-api-access-gsgjk\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.075149 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.109771 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.109835 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.176472 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.176670 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsgjk\" (UniqueName: \"kubernetes.io/projected/bee10dce-c68f-47f4-84e0-623f276964d8-kube-api-access-gsgjk\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.176720 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.181972 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.182687 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.199177 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsgjk\" (UniqueName: \"kubernetes.io/projected/bee10dce-c68f-47f4-84e0-623f276964d8-kube-api-access-gsgjk\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.309802 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.802541 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.021126 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bee10dce-c68f-47f4-84e0-623f276964d8","Type":"ContainerStarted","Data":"5522e839542cc231908bac44f370a5152779d196633377928af10d74f71a95b0"} Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.125905 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.126008 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.910468 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.042813 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8" exitCode=137 Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.042838 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8"} Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.048965 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bee10dce-c68f-47f4-84e0-623f276964d8","Type":"ContainerStarted","Data":"4d614dc400670f15f9dd67948b7cdfabe334a78d7e990ee23c2014481f120b38"} Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.049119 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.086823 4593 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.086801608 podStartE2EDuration="3.086801608s" podCreationTimestamp="2026-01-29 11:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:45.069465757 +0000 UTC m=+1310.942499988" watchObservedRunningTime="2026-01-29 11:20:45.086801608 +0000 UTC m=+1310.959835799" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.518896 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561303 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561385 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561693 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561754 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561781 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561812 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.562829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs" (OuterVolumeSpecName: "logs") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.580409 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr" (OuterVolumeSpecName: "kube-api-access-5bjjr") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "kube-api-access-5bjjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.582274 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.615427 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts" (OuterVolumeSpecName: "scripts") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.629749 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.638483 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data" (OuterVolumeSpecName: "config-data") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664128 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664191 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664212 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664247 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664259 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664269 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664506 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.766665 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.063499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf"} Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.063604 4593 scope.go:117] "RemoveContainer" containerID="3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.064558 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.112730 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.123375 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.259499 4593 scope.go:117] "RemoveContainer" containerID="79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8" Jan 29 11:20:47 crc kubenswrapper[4593]: I0129 11:20:47.088095 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" path="/var/lib/kubelet/pods/b9761a4f-8669-4e74-9f8e-ed8b9778af11/volumes" Jan 29 11:20:47 crc kubenswrapper[4593]: I0129 11:20:47.206103 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:20:47 crc kubenswrapper[4593]: I0129 11:20:47.238439 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:20:48 crc kubenswrapper[4593]: I0129 11:20:48.121375 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:20:49 crc kubenswrapper[4593]: I0129 11:20:49.808320 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:49 crc kubenswrapper[4593]: I0129 11:20:49.808731 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:50 crc kubenswrapper[4593]: I0129 11:20:50.891869 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:50 crc kubenswrapper[4593]: I0129 11:20:50.891869 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:51 crc kubenswrapper[4593]: I0129 11:20:51.105460 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:51 crc kubenswrapper[4593]: > Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.024288 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.133884 4593 generic.go:334] "Generic (PLEG): container finished" podID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" exitCode=137 Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.133950 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerDied","Data":"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69"} Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.133983 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerDied","Data":"afca7bf4b299e69d695725ee22c529f3ea659c864ce859245236b6ced858cb90"} Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.134005 4593 scope.go:117] "RemoveContainer" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.134164 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.162584 4593 scope.go:117] "RemoveContainer" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.163436 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69\": container with ID starting with e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69 not found: ID does not exist" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.163486 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69"} err="failed to get container status \"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69\": rpc error: code = NotFound desc = could not find container \"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69\": container with ID starting with e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69 not found: ID does not exist" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.174252 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.174591 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.174670 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod 
\"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.185191 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt" (OuterVolumeSpecName: "kube-api-access-5kpzt") pod "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" (UID: "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8"). InnerVolumeSpecName "kube-api-access-5kpzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.219968 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data" (OuterVolumeSpecName: "config-data") pod "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" (UID: "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.244811 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" (UID: "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.277402 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.277443 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.277453 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.484725 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.505687 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.525582 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.526235 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526268 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.526306 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526315 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc 
kubenswrapper[4593]: E0129 11:20:52.526328 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526336 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.526388 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526398 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526668 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526691 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526703 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526722 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526739 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.527686 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.534540 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.536269 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.536829 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.537615 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.685432 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.685946 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.686080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.686115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7m4l\" (UniqueName: \"kubernetes.io/projected/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-kube-api-access-c7m4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.686264 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792700 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7m4l\" (UniqueName: \"kubernetes.io/projected/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-kube-api-access-c7m4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792782 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792881 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792986 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.798532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.810582 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.822538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.824210 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.832411 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7m4l\" (UniqueName: \"kubernetes.io/projected/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-kube-api-access-c7m4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.914126 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.089595 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" path="/var/lib/kubelet/pods/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8/volumes" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.121291 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.128149 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.129252 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.190078 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.352123 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.564509 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:54 crc kubenswrapper[4593]: I0129 11:20:54.179262 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b25e9a9-4f12-4b7f-9001-74b6c3feb118","Type":"ContainerStarted","Data":"ae8d97c1afea9ef91d94a960a07b3449ddd6e5831b50f7f17248b8fdd70aa718"} Jan 29 11:20:54 crc kubenswrapper[4593]: I0129 11:20:54.179521 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b25e9a9-4f12-4b7f-9001-74b6c3feb118","Type":"ContainerStarted","Data":"1a4d9a57fcbf76afd97da28948543e0ee1cacf12ce28e788ed4aadf97075d766"} Jan 29 11:20:54 crc kubenswrapper[4593]: I0129 11:20:54.203740 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.203715234 podStartE2EDuration="2.203715234s" podCreationTimestamp="2026-01-29 11:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:54.199062928 +0000 UTC m=+1320.072097119" watchObservedRunningTime="2026-01-29 11:20:54.203715234 +0000 UTC m=+1320.076749425" Jan 29 11:20:57 crc kubenswrapper[4593]: I0129 11:20:57.914552 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.813291 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.816375 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.817456 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.829276 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.124879 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:00 crc 
kubenswrapper[4593]: I0129 11:21:00.179973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.250056 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.253296 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.578032 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:21:00 crc kubenswrapper[4593]: E0129 11:21:00.578568 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.578603 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.580138 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.626985 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724579 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724679 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724727 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724766 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724897 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826489 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826617 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826688 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826752 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826794 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828021 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828053 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828624 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828801 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828794 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.866521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.938590 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:01 crc kubenswrapper[4593]: I0129 11:21:01.325173 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:21:01 crc kubenswrapper[4593]: I0129 11:21:01.325717 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" containerID="cri-o://24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" gracePeriod=2 Jan 29 11:21:01 crc kubenswrapper[4593]: I0129 11:21:01.494729 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:21:01 crc kubenswrapper[4593]: E0129 11:21:01.678821 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9194cbfb_27b9_47e8_90eb_64b9391d0b07.slice/crio-24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9194cbfb_27b9_47e8_90eb_64b9391d0b07.slice/crio-conmon-24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.030070 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.168954 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.169012 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.169040 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.172219 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities" (OuterVolumeSpecName: "utilities") pod "9194cbfb-27b9-47e8-90eb-64b9391d0b07" (UID: "9194cbfb-27b9-47e8-90eb-64b9391d0b07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.207824 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg" (OuterVolumeSpecName: "kube-api-access-9pvlg") pod "9194cbfb-27b9-47e8-90eb-64b9391d0b07" (UID: "9194cbfb-27b9-47e8-90eb-64b9391d0b07"). InnerVolumeSpecName "kube-api-access-9pvlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.272423 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.272461 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.285308 4593 generic.go:334] "Generic (PLEG): container finished" podID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" exitCode=0 Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.285412 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerDied","Data":"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.285461 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerStarted","Data":"c6f1f6dc4fba44b238c92a14ad6df982c542f3af9ec19723b99a766da8d106d2"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.320086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9194cbfb-27b9-47e8-90eb-64b9391d0b07" (UID: "9194cbfb-27b9-47e8-90eb-64b9391d0b07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330521 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" exitCode=0 Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330815 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330885 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"5ea6d9d61fd2cf95d30b451aea020cc55aa6add991037bc5209ce7d2a046ef7e"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330912 4593 scope.go:117] "RemoveContainer" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.331205 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.375907 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.452981 4593 scope.go:117] "RemoveContainer" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.456431 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.468331 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.506349 4593 scope.go:117] "RemoveContainer" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.556253 4593 scope.go:117] "RemoveContainer" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.587883 4593 scope.go:117] "RemoveContainer" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.590660 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2\": container with ID starting with 24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2 not found: ID does not exist" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.590727 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2"} err="failed to get container status \"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2\": rpc error: code = NotFound desc = could not find container \"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2\": container with ID starting with 24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2 not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.590755 4593 scope.go:117] "RemoveContainer" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.591224 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95\": container with ID starting with 01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95 not found: ID does not exist" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591272 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} err="failed to get container status \"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95\": rpc error: code = NotFound desc = could not find container 
\"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95\": container with ID starting with 01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95 not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591291 4593 scope.go:117] "RemoveContainer" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.591750 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f\": container with ID starting with 193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f not found: ID does not exist" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591799 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f"} err="failed to get container status \"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f\": rpc error: code = NotFound desc = could not find container \"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f\": container with ID starting with 193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591831 4593 scope.go:117] "RemoveContainer" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.600228 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410\": container with ID starting with ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410 not found: ID does not exist" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.600533 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410"} err="failed to get container status \"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410\": rpc error: code = NotFound desc = could not find container \"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410\": container with ID starting with ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410 not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.914866 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.954378 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.085406 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" path="/var/lib/kubelet/pods/9194cbfb-27b9-47e8-90eb-64b9391d0b07/volumes" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.346293 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" 
event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerStarted","Data":"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d"} Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.346343 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.367334 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" podStartSLOduration=3.367308115 podStartE2EDuration="3.367308115s" podCreationTimestamp="2026-01-29 11:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:03.365531476 +0000 UTC m=+1329.238565667" watchObservedRunningTime="2026-01-29 11:21:03.367308115 +0000 UTC m=+1329.240342306" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.377298 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.522487 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.528079 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" containerID="cri-o://ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.528122 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" containerID="cri-o://1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.528079 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" containerID="cri-o://5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.529216 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" containerID="cri-o://718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.569667 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570088 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-utilities" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570108 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-utilities" Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570124 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570130 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" 
containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570146 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-content" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570153 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-content" Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570174 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570180 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570373 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570396 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570406 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.571035 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.573143 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.573338 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.599911 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615298 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615660 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgsmn\" (UniqueName: 
\"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.635474 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.635695 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" containerID="cri-o://339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.635978 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" containerID="cri-o://e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717765 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717866 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.723615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.724572 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.732973 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.742181 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.937194 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359020 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" exitCode=0 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359428 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" exitCode=2 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359441 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" exitCode=0 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359205 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359552 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359569 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.362925 4593 generic.go:334] "Generic (PLEG): container finished" podID="ec186581-a9e6-46bb-9479-118d17b02d68" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" exitCode=143 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.362980 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerDied","Data":"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.480381 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:21:05 crc kubenswrapper[4593]: I0129 11:21:05.382559 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerStarted","Data":"1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1"} Jan 29 11:21:05 crc kubenswrapper[4593]: I0129 11:21:05.383620 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerStarted","Data":"8d964d0f6fd7a3a0690290e5907b2f72debcae58f7a1f3f8fa117ebd225127d0"} Jan 29 11:21:05 crc kubenswrapper[4593]: I0129 11:21:05.415060 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4klpz" podStartSLOduration=2.415033965 podStartE2EDuration="2.415033965s" podCreationTimestamp="2026-01-29 11:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:05.402487835 +0000 UTC m=+1331.275522026" watchObservedRunningTime="2026-01-29 11:21:05.415033965 +0000 UTC m=+1331.288068156" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.242238 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.315766 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.316865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.317998 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.318029 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.320516 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs" (OuterVolumeSpecName: "logs") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.324088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2" (OuterVolumeSpecName: "kube-api-access-sgrs2") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "kube-api-access-sgrs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.395863 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data" (OuterVolumeSpecName: "config-data") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.421962 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.421989 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.421999 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.423896 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424238 4593 generic.go:334] "Generic (PLEG): container finished" podID="ec186581-a9e6-46bb-9479-118d17b02d68" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" exitCode=0 Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424299 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerDied","Data":"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5"} Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424326 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerDied","Data":"f7db3d2de4fdf878656547d9c3589d171005e852c5677ab4b1055551daeb9535"} Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424342 4593 scope.go:117] "RemoveContainer" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424375 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.497858 4593 scope.go:117] "RemoveContainer" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.502285 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.520988 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.524401 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553295 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.553810 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553825 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.553847 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553853 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.553863 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553869 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.554037 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.554060 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.555129 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.561271 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.561491 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.565269 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.578416 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.579956 4593 scope.go:117] "RemoveContainer" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.580749 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5\": container with ID starting with e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5 not found: ID does not exist" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.580788 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5"} err="failed to get container status \"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5\": rpc error: code = NotFound desc = could not find container \"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5\": container with ID starting with e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5 not found: ID does not exist" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.580832 4593 scope.go:117] "RemoveContainer" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.581284 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db\": container with ID starting with 339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db not found: ID does not exist" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.581318 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db"} err="failed to get container status \"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db\": rpc error: code = NotFound desc = could not find container \"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db\": container with ID starting with 339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db not found: ID does not exist" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.625974 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" 
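
Editor's note (not part of the captured log): the entries above trace a pod recreation — the old openstack/nova-api-0 (UID ec186581-…) has its volumes unmounted and detached, its containers reported via "SyncLoop (PLEG): event for pod" as ContainerDied, the stale CPU/memory-manager state removed, and then a replacement pod (UID e880ed3e-…) is ADDed and its volumes verified. The sketch below is an illustrative way to read such a recreation as a per-pod timeline; the script name, the journalctl pipeline, and the regex are assumptions for this sketch, not anything present in the log itself. It only relies on the klog line shape visible above.

#!/usr/bin/env python3
# pleg_timeline.py -- illustrative sketch: group the kubelet's
# "SyncLoop (PLEG): event for pod" entries by pod so a recreation such as
# openstack/nova-api-0 (ContainerDied on the old UID, ContainerStarted on the
# new one) reads as a timeline. Assumed usage (hypothetical):
#     journalctl -u kubelet --no-pager | python3 pleg_timeline.py
import json
import re
import sys
from collections import defaultdict

# Matches lines shaped like the ones in this log, e.g.:
#   Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424299 4593 kubelet.go:2453]
#   "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0"
#   event={"ID":"...","Type":"ContainerDied","Data":"..."}
PLEG_RE = re.compile(
    r'^\w{3} +\d+ [\d:]+ \S+ \S+: I\d+ (?P<ktime>[\d:.]+) \d+ kubelet\.go:\d+\] '
    r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" event=(?P<event>\{.*\})'
)

def main() -> None:
    # pod name -> list of (kubelet timestamp, pod UID, event type, container/sandbox id)
    timelines = defaultdict(list)
    for line in sys.stdin:
        m = PLEG_RE.search(line)
        if not m:
            continue
        ev = json.loads(m.group("event"))  # {"ID": pod UID, "Type": ..., "Data": container id}
        timelines[m.group("pod")].append(
            (m.group("ktime"), ev.get("ID", ""), ev.get("Type", ""), ev.get("Data", ""))
        )
    for pod, events in sorted(timelines.items()):
        print(pod)
        for ktime, uid, etype, data in events:
            print(f"  {ktime}  {etype:<18} uid={uid}  id={data}")

if __name__ == "__main__":
    main()

Run against a capture like this one, a pod that was deleted and recreated shows two UIDs under the same pod name: ContainerDied events for the old UID followed by ContainerStarted events for the new sandbox and containers. The log resumes below.
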
Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626126 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626157 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626223 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626331 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.728490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.728979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729025 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729047 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729072 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729170 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729888 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.734310 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.735367 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.736143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.736823 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.745757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.897913 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.454551 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:08 crc kubenswrapper[4593]: W0129 11:21:08.476105 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode880ed3e_b1e4_40f6_bd7a_45b5e0e1c2b6.slice/crio-ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a WatchSource:0}: Error finding container ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a: Status 404 returned error can't find the container with id ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.754380 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853838 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853901 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853967 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853984 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854026 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854052 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854075 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.863480 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.864361 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.873889 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z" (OuterVolumeSpecName: "kube-api-access-9q87z") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "kube-api-access-9q87z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.876100 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts" (OuterVolumeSpecName: "scripts") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.956884 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.957206 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.957314 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.957410 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.963936 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.997486 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.065085 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.065127 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.072017 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data" (OuterVolumeSpecName: "config-data") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.077472 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.099574 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" path="/var/lib/kubelet/pods/ec186581-a9e6-46bb-9479-118d17b02d68/volumes" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.167058 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.167318 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448212 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" exitCode=0 Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448538 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448565 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448582 4593 scope.go:117] "RemoveContainer" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448722 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.453407 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerStarted","Data":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.453455 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerStarted","Data":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.453470 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerStarted","Data":"ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.483905 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.484250 4593 scope.go:117] "RemoveContainer" containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.533542 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.535477 4593 scope.go:117] "RemoveContainer" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.540068 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5400305469999997 podStartE2EDuration="2.540030547s" podCreationTimestamp="2026-01-29 11:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:09.517608339 +0000 UTC m=+1335.390642540" watchObservedRunningTime="2026-01-29 11:21:09.540030547 +0000 UTC m=+1335.413064738" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.589782 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.590320 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590337 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.590352 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590358 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.590365 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590371 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" Jan 29 11:21:09 crc 
kubenswrapper[4593]: E0129 11:21:09.590397 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590403 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590587 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590609 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590621 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590647 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.592653 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.597070 4593 scope.go:117] "RemoveContainer" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.599213 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.599401 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.599497 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.601763 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.643920 4593 scope.go:117] "RemoveContainer" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.644349 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9\": container with ID starting with 5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9 not found: ID does not exist" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644384 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9"} err="failed to get container status \"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9\": rpc error: code = NotFound desc = could not find container \"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9\": container with ID starting with 5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9 not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644406 4593 scope.go:117] "RemoveContainer" 
containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.644764 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b\": container with ID starting with ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b not found: ID does not exist" containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644808 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b"} err="failed to get container status \"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b\": rpc error: code = NotFound desc = could not find container \"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b\": container with ID starting with ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644838 4593 scope.go:117] "RemoveContainer" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.645142 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f\": container with ID starting with 1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f not found: ID does not exist" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.645176 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f"} err="failed to get container status \"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f\": rpc error: code = NotFound desc = could not find container \"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f\": container with ID starting with 1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.645193 4593 scope.go:117] "RemoveContainer" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.645477 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050\": container with ID starting with 718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050 not found: ID does not exist" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.645505 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050"} err="failed to get container status \"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050\": rpc error: code = NotFound desc = could not find container \"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050\": container with ID starting with 
718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050 not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674377 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674679 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-run-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674942 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-scripts\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9tw\" (UniqueName: \"kubernetes.io/projected/8581bb16-8d35-4521-8886-3c71554a3a4d-kube-api-access-6t9tw\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675252 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-config-data\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-log-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.776773 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777159 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-run-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777369 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-scripts\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9tw\" (UniqueName: \"kubernetes.io/projected/8581bb16-8d35-4521-8886-3c71554a3a4d-kube-api-access-6t9tw\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-config-data\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777942 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-log-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.778148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-run-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.778417 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-log-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.783280 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.783548 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.783889 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.786563 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-scripts\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.788206 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-config-data\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.801397 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9tw\" (UniqueName: \"kubernetes.io/projected/8581bb16-8d35-4521-8886-3c71554a3a4d-kube-api-access-6t9tw\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.928928 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:10 crc kubenswrapper[4593]: I0129 11:21:10.451466 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:10 crc kubenswrapper[4593]: W0129 11:21:10.455465 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8581bb16_8d35_4521_8886_3c71554a3a4d.slice/crio-f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933 WatchSource:0}: Error finding container f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933: Status 404 returned error can't find the container with id f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933 Jan 29 11:21:10 crc kubenswrapper[4593]: I0129 11:21:10.894939 4593 scope.go:117] "RemoveContainer" containerID="2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d" Jan 29 11:21:10 crc kubenswrapper[4593]: I0129 11:21:10.940874 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.051444 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.051886 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" containerID="cri-o://5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb" gracePeriod=10 Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.095913 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" 
path="/var/lib/kubelet/pods/934ccdca-f1e6-43d2-af69-2efb205bf387/volumes" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.478647 4593 generic.go:334] "Generic (PLEG): container finished" podID="697e4dbe-9b00-4891-9456-f76cb9642401" containerID="5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb" exitCode=0 Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.478659 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerDied","Data":"5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb"} Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.485820 4593 generic.go:334] "Generic (PLEG): container finished" podID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerID="1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1" exitCode=0 Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.485895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerDied","Data":"1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1"} Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.489671 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933"} Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.577115 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720309 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720421 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720625 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720676 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720724 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.726620 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw" (OuterVolumeSpecName: "kube-api-access-7ppsw") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "kube-api-access-7ppsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.793550 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.812812 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config" (OuterVolumeSpecName: "config") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.819522 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822207 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822227 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822237 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822246 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.825129 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.840198 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.923803 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.924987 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.501412 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"f022985901fafb0e1edf6beb865adbec3ab446e664ba4bce07baeda349fe8f88"} Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.503420 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerDied","Data":"7eb448007e7f2f259e7551ed6226b778b13ff57e3f9a0c2ec212e1fb5e5be79a"} Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.503467 4593 scope.go:117] "RemoveContainer" containerID="5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.503436 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.534610 4593 scope.go:117] "RemoveContainer" containerID="7393be6f52eedddb8f2e44100a437ddd9c4a6aceb5605fe268b7dc5e484c61b6" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.554575 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.565225 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.860355 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.992707 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.993651 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.993776 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.993881 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.000125 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn" (OuterVolumeSpecName: "kube-api-access-xgsmn") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "kube-api-access-xgsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.001265 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts" (OuterVolumeSpecName: "scripts") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.024320 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data" (OuterVolumeSpecName: "config-data") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.032534 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.091230 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" path="/var/lib/kubelet/pods/697e4dbe-9b00-4891-9456-f76cb9642401/volumes" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101861 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101910 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101927 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101943 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.518761 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"d6fe9c8cef1aaf2e257ab06d4df70f87b85fb8c00f94feac5166cf1b6dd99b4e"} Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.519555 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"de6839d22a803c3f2ec07740614bc85bfd6e56d1aa57f5f3ef20bc4f7ee3ad36"} Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.522076 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerDied","Data":"8d964d0f6fd7a3a0690290e5907b2f72debcae58f7a1f3f8fa117ebd225127d0"} Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.522212 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d964d0f6fd7a3a0690290e5907b2f72debcae58f7a1f3f8fa117ebd225127d0" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.522381 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.630531 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.630804 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" containerID="cri-o://55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.632214 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" containerID="cri-o://879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.667328 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.667572 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" containerID="cri-o://f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.723066 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.723356 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" containerID="cri-o://24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.723481 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" containerID="cri-o://cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" gracePeriod=30 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.294823 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.433822 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.433876 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.433922 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.434041 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.434135 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.434167 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.436429 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs" (OuterVolumeSpecName: "logs") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.464420 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f" (OuterVolumeSpecName: "kube-api-access-b5g6f") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "kube-api-access-b5g6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.479517 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.509949 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data" (OuterVolumeSpecName: "config-data") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.514715 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540896 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540925 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540935 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540945 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540953 4593 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.556658 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.560036 4593 generic.go:334] "Generic (PLEG): container finished" podID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" exitCode=143 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.561295 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerDied","Data":"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572810 4593 generic.go:334] "Generic (PLEG): container finished" podID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" exitCode=0 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572849 4593 generic.go:334] "Generic (PLEG): container finished" podID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" exitCode=143 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572875 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerDied","Data":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572902 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerDied","Data":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerDied","Data":"ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572927 4593 scope.go:117] "RemoveContainer" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.573054 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.645511 4593 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.649253 4593 scope.go:117] "RemoveContainer" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.657343 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.678832 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.691242 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693816 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693867 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693902 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693911 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693957 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693969 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693985 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="init" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693993 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="init" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.694039 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerName="nova-manage" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694049 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerName="nova-manage" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694441 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694462 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694476 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694524 4593 
memory_manager.go:354] "RemoveStaleState removing state" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerName="nova-manage" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.697953 4593 scope.go:117] "RemoveContainer" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.699181 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": container with ID starting with 879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1 not found: ID does not exist" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.699245 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} err="failed to get container status \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": rpc error: code = NotFound desc = could not find container \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": container with ID starting with 879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1 not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.699278 4593 scope.go:117] "RemoveContainer" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.700986 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.704911 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": container with ID starting with 55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b not found: ID does not exist" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.704987 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} err="failed to get container status \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": rpc error: code = NotFound desc = could not find container \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": container with ID starting with 55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.705045 4593 scope.go:117] "RemoveContainer" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.707454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.707775 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} err="failed to get container status \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": rpc error: code = NotFound desc = could not find container 
\"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": container with ID starting with 879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1 not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.707808 4593 scope.go:117] "RemoveContainer" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.708078 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.708314 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} err="failed to get container status \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": rpc error: code = NotFound desc = could not find container \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": container with ID starting with 55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.708488 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.719301 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747271 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747592 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjrn\" (UniqueName: \"kubernetes.io/projected/0d08c570-1374-4c5a-832e-c973d7a39796-kube-api-access-2mjrn\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747726 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d08c570-1374-4c5a-832e-c973d7a39796-logs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747886 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.748000 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-config-data\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.748207 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.850009 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d08c570-1374-4c5a-832e-c973d7a39796-logs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.850471 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.850430 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d08c570-1374-4c5a-832e-c973d7a39796-logs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.851415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-config-data\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.851857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.852297 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.852668 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mjrn\" (UniqueName: \"kubernetes.io/projected/0d08c570-1374-4c5a-832e-c973d7a39796-kube-api-access-2mjrn\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.857147 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.857788 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.857928 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-config-data\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.860433 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.877557 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mjrn\" (UniqueName: \"kubernetes.io/projected/0d08c570-1374-4c5a-832e-c973d7a39796-kube-api-access-2mjrn\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:15 crc kubenswrapper[4593]: I0129 11:21:15.031847 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:15 crc kubenswrapper[4593]: I0129 11:21:15.145947 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" path="/var/lib/kubelet/pods/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6/volumes" Jan 29 11:21:15 crc kubenswrapper[4593]: I0129 11:21:15.601024 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.608897 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"7a937c89fb9109b345f6f22c51e0a60188931bf44b81b647fac5bcc01cf19596"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.609615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.612669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d08c570-1374-4c5a-832e-c973d7a39796","Type":"ContainerStarted","Data":"e1dc8489211673f5a24d00e649bbdc05dd87332bd16220a4800c62a5e142b3cd"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.612728 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d08c570-1374-4c5a-832e-c973d7a39796","Type":"ContainerStarted","Data":"4f34a69cd0ccc8a436c33666880e5411f3eb6ba4b621cea8cd63c32738c221fa"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.612741 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d08c570-1374-4c5a-832e-c973d7a39796","Type":"ContainerStarted","Data":"b49cf6bde7134a1d6381169e903a4ce3cd1d72b2b35b1a183bb95ec308c2979a"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.643273 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.545222436 podStartE2EDuration="7.643249257s" podCreationTimestamp="2026-01-29 11:21:09 +0000 UTC" firstStartedPulling="2026-01-29 11:21:10.457790087 +0000 UTC m=+1336.330824278" lastFinishedPulling="2026-01-29 11:21:15.555816898 +0000 UTC m=+1341.428851099" observedRunningTime="2026-01-29 11:21:16.627261943 +0000 UTC m=+1342.500296144" watchObservedRunningTime="2026-01-29 11:21:16.643249257 +0000 UTC m=+1342.516283448" Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.665801 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.665777607 podStartE2EDuration="2.665777607s" podCreationTimestamp="2026-01-29 11:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:16.656126695 +0000 UTC m=+1342.529160896" watchObservedRunningTime="2026-01-29 11:21:16.665777607 +0000 UTC m=+1342.538811798" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.218743 4593 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.221498 4593 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.221842 4593 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.221922 4593 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.343470 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.417565 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.417833 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.417913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.418051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.418101 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.421779 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs" (OuterVolumeSpecName: "logs") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.434832 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l" (OuterVolumeSpecName: "kube-api-access-fww9l") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "kube-api-access-fww9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.500430 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data" (OuterVolumeSpecName: "config-data") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.523470 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.523500 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.523509 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.560839 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.573185 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.622484 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.625418 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.625447 4593 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633461 4593 generic.go:334] "Generic (PLEG): container finished" podID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" exitCode=0 Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633533 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerDied","Data":"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633568 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerDied","Data":"c94ac2729f1f8331d111e95fa7df8974b6fcb7da88f692f7369227d26b750286"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633592 4593 scope.go:117] "RemoveContainer" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633722 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648436 4593 generic.go:334] "Generic (PLEG): container finished" podID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" exitCode=0 Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648720 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648793 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerDied","Data":"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648818 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerDied","Data":"185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.676386 4593 scope.go:117] "RemoveContainer" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.677014 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058\": container with ID starting with f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 not found: ID does not exist" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.677061 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058"} err="failed to get container status \"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058\": rpc error: code = NotFound desc = could not find container \"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058\": container with ID starting with f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 not found: ID does not exist" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.677089 4593 scope.go:117] "RemoveContainer" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.709837 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.724107 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.726699 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"40dd43f0-0621-4358-8019-b58cd5fbcc79\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.726875 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"40dd43f0-0621-4358-8019-b58cd5fbcc79\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.726923 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"40dd43f0-0621-4358-8019-b58cd5fbcc79\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.738323 4593 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m" (OuterVolumeSpecName: "kube-api-access-kpt5m") pod "40dd43f0-0621-4358-8019-b58cd5fbcc79" (UID: "40dd43f0-0621-4358-8019-b58cd5fbcc79"). InnerVolumeSpecName "kube-api-access-kpt5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.742194 4593 scope.go:117] "RemoveContainer" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743047 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.743438 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743474 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.743494 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743501 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.743529 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743552 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743756 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743792 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743808 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.751547 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.758438 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.759078 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.763846 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.788838 4593 scope.go:117] "RemoveContainer" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.789581 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df\": container with ID starting with cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df not found: ID does not exist" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.789613 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df"} err="failed to get container status \"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df\": rpc error: code = NotFound desc = could not find container \"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df\": container with ID starting with cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df not found: ID does not exist" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.789657 4593 scope.go:117] "RemoveContainer" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.790058 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c\": container with ID starting with 24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c not found: ID does not exist" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.790076 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c"} err="failed to get container status \"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c\": rpc error: code = NotFound desc = could not find container \"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c\": container with ID starting with 24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c not found: ID does not exist" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.798916 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data" (OuterVolumeSpecName: "config-data") pod "40dd43f0-0621-4358-8019-b58cd5fbcc79" (UID: "40dd43f0-0621-4358-8019-b58cd5fbcc79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829399 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ph92\" (UniqueName: \"kubernetes.io/projected/649faf5c-e6bb-4e3d-8cb5-28c57f100008-kube-api-access-8ph92\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829517 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649faf5c-e6bb-4e3d-8cb5-28c57f100008-logs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829590 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829613 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-config-data\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829682 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829694 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.841829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40dd43f0-0621-4358-8019-b58cd5fbcc79" (UID: "40dd43f0-0621-4358-8019-b58cd5fbcc79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.930932 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ph92\" (UniqueName: \"kubernetes.io/projected/649faf5c-e6bb-4e3d-8cb5-28c57f100008-kube-api-access-8ph92\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931499 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931656 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649faf5c-e6bb-4e3d-8cb5-28c57f100008-logs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931808 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931887 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-config-data\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.932007 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.932556 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649faf5c-e6bb-4e3d-8cb5-28c57f100008-logs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.935669 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-config-data\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.936131 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.937115 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " 
pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.965234 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ph92\" (UniqueName: \"kubernetes.io/projected/649faf5c-e6bb-4e3d-8cb5-28c57f100008-kube-api-access-8ph92\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.089669 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.098681 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.099822 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.114332 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.119411 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.124033 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.128496 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.238135 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9ll\" (UniqueName: \"kubernetes.io/projected/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-kube-api-access-xm9ll\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.238503 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.238661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-config-data\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.340842 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-config-data\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.340954 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm9ll\" (UniqueName: \"kubernetes.io/projected/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-kube-api-access-xm9ll\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.340982 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.346978 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.347159 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-config-data\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.365960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9ll\" (UniqueName: \"kubernetes.io/projected/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-kube-api-access-xm9ll\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.531726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.576466 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.665597 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"649faf5c-e6bb-4e3d-8cb5-28c57f100008","Type":"ContainerStarted","Data":"21aef2da5eea28bdb4c686b164d4d33176d54adf3d3ed82af36c2ade08a857ca"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.012927 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:19 crc kubenswrapper[4593]: W0129 11:21:19.016500 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eff0b9f_e2c4_4ae0_9b42_585f9141d740.slice/crio-f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a WatchSource:0}: Error finding container f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a: Status 404 returned error can't find the container with id f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.088497 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" path="/var/lib/kubelet/pods/40dd43f0-0621-4358-8019-b58cd5fbcc79/volumes" Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.089784 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" path="/var/lib/kubelet/pods/eaa00230-26f8-4fa7-b32c-994ec82a6ac4/volumes" Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.678164 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"649faf5c-e6bb-4e3d-8cb5-28c57f100008","Type":"ContainerStarted","Data":"4d773d722a5618f7389efdb82ee16c253498b2a7d6513aa8ff6b7f987f512d54"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.678216 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"649faf5c-e6bb-4e3d-8cb5-28c57f100008","Type":"ContainerStarted","Data":"9f84d2fca65e709b5b83135138cd04706d337eba2d35b53a555fb6a431ad8831"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.680267 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4eff0b9f-e2c4-4ae0-9b42-585f9141d740","Type":"ContainerStarted","Data":"dfbb4a0969380b9fadf88a508ec4f02f949105466a18e19478177689ef066784"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.680312 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4eff0b9f-e2c4-4ae0-9b42-585f9141d740","Type":"ContainerStarted","Data":"f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.722850 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.722833169 podStartE2EDuration="1.722833169s" podCreationTimestamp="2026-01-29 11:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:19.718879202 +0000 UTC m=+1345.591913393" watchObservedRunningTime="2026-01-29 11:21:19.722833169 +0000 UTC m=+1345.595867360" Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.723951 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.723944849 podStartE2EDuration="2.723944849s" podCreationTimestamp="2026-01-29 11:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:19.703840724 +0000 UTC m=+1345.576874925" watchObservedRunningTime="2026-01-29 11:21:19.723944849 +0000 UTC m=+1345.596979040" Jan 29 11:21:23 crc kubenswrapper[4593]: I0129 11:21:23.100140 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:21:23 crc kubenswrapper[4593]: I0129 11:21:23.101739 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:21:23 crc kubenswrapper[4593]: I0129 11:21:23.532012 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:21:25 crc kubenswrapper[4593]: I0129 11:21:25.032809 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:21:25 crc kubenswrapper[4593]: I0129 11:21:25.032881 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:21:26 crc kubenswrapper[4593]: I0129 11:21:26.052007 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d08c570-1374-4c5a-832e-c973d7a39796" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:26 crc kubenswrapper[4593]: I0129 11:21:26.052030 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d08c570-1374-4c5a-832e-c973d7a39796" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.100271 4593 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.100665 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.531953 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.571653 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.792868 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:21:29 crc kubenswrapper[4593]: I0129 11:21:29.114809 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="649faf5c-e6bb-4e3d-8cb5-28c57f100008" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:29 crc kubenswrapper[4593]: I0129 11:21:29.114841 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="649faf5c-e6bb-4e3d-8cb5-28c57f100008" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.043446 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.044393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.050232 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.054491 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.825868 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.832934 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.189187 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.199463 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.200022 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.866368 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:21:39 crc kubenswrapper[4593]: I0129 11:21:39.938489 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:21:49 crc kubenswrapper[4593]: I0129 11:21:49.663532 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:21:50 crc kubenswrapper[4593]: I0129 11:21:50.882155 4593 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:21:55 crc kubenswrapper[4593]: I0129 11:21:55.452099 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" containerID="cri-o://b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0" gracePeriod=604795 Jan 29 11:21:55 crc kubenswrapper[4593]: I0129 11:21:55.816212 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Jan 29 11:21:55 crc kubenswrapper[4593]: I0129 11:21:55.947081 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" containerID="cri-o://a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" gracePeriod=604795 Jan 29 11:21:56 crc kubenswrapper[4593]: I0129 11:21:56.259923 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.093072 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerID="b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0" exitCode=0 Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.093556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerDied","Data":"b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0"} Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.279770 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309292 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309391 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309406 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309430 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309462 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309480 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309608 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309649 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: 
\"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309673 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.310016 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.310426 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.317268 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.339059 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info" (OuterVolumeSpecName: "pod-info") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.366018 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f" (OuterVolumeSpecName: "kube-api-access-5gt4f") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "kube-api-access-5gt4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.374672 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.374900 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.385012 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433180 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433211 4593 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433221 4593 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433234 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433245 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433271 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433279 4593 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433288 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.467984 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data" (OuterVolumeSpecName: "config-data") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.475312 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf" (OuterVolumeSpecName: "server-conf") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.536849 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.536895 4593 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.537509 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.598094 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.622601 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.642053 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.642101 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.743043 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.743848 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744541 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744656 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744680 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744703 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744729 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744817 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744846 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744880 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.745761 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.746231 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.748474 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.749716 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751146 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751176 4593 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751190 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751206 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751218 4593 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.752044 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq" (OuterVolumeSpecName: "kube-api-access-6pmxq") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "kube-api-access-6pmxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.763064 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info" (OuterVolumeSpecName: "pod-info") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.780908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.808129 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data" (OuterVolumeSpecName: "config-data") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855127 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855176 4593 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855212 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855227 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.897764 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898193 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898211 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898228 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898235 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898257 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898263 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898276 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898283 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898462 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 
11:22:02.898482 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.899483 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.902328 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.907350 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.962447 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.962553 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.995307 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf" (OuterVolumeSpecName: "server-conf") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.064931 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.064975 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065034 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065055 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065117 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " 
pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065151 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065289 4593 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065414 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112007 4593 generic.go:334] "Generic (PLEG): container finished" podID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" exitCode=0 Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112050 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerDied","Data":"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112"} Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112123 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerDied","Data":"5a494b5365040c8bc0ddefc581e932c4375131be0145147547aba83d5a596b24"} Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112142 4593 scope.go:117] "RemoveContainer" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.118615 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerDied","Data":"5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090"} Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.118668 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.157887 4593 scope.go:117] "RemoveContainer" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169667 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169761 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169782 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169841 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169874 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169906 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169955 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.170740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: 
\"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.171141 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.171538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.171720 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.172104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.172301 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.188407 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.211200 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.216520 4593 scope.go:117] "RemoveContainer" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" Jan 29 11:22:03 crc kubenswrapper[4593]: E0129 11:22:03.217446 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112\": container with ID starting with a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112 not found: ID does not exist" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.217479 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112"} err="failed to get container status \"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112\": rpc error: code = NotFound desc = could not find container \"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112\": container with ID starting with a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112 not found: ID does not exist" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 
11:22:03.217501 4593 scope.go:117] "RemoveContainer" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" Jan 29 11:22:03 crc kubenswrapper[4593]: E0129 11:22:03.221282 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f\": container with ID starting with 6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f not found: ID does not exist" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.221325 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f"} err="failed to get container status \"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f\": rpc error: code = NotFound desc = could not find container \"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f\": container with ID starting with 6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f not found: ID does not exist" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.221346 4593 scope.go:117] "RemoveContainer" containerID="b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.225209 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.266859 4593 scope.go:117] "RemoveContainer" containerID="44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.271546 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.311116 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.340890 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.351225 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.355714 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: E0129 11:22:03.359443 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2ccd2b_429d_43e8_a674_fb5c2abb0754.slice/crio-5a494b5365040c8bc0ddefc581e932c4375131be0145147547aba83d5a596b24\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f6d0a4_2543_4de8_a64e_f3ce4c2bb11e.slice/crio-5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f6d0a4_2543_4de8_a64e_f3ce4c2bb11e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2ccd2b_429d_43e8_a674_fb5c2abb0754.slice\": RecentStats: unable to find data in memory cache]" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.363969 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364105 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364217 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364411 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364455 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.365186 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ck876" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.365289 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.388917 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.390458 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.393212 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.393906 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ztnqn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.394196 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.394365 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.394483 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.397318 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.397427 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.401710 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.414117 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481620 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481676 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqm9\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-kube-api-access-9cqm9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481706 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481724 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481812 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481850 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66e64ba6-3c75-4430-9f03-0fe9dbb37459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481866 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63184534-fd04-4ef9-9c56-de6c30745ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481894 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481924 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66e64ba6-3c75-4430-9f03-0fe9dbb37459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481967 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481989 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482008 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482031 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63184534-fd04-4ef9-9c56-de6c30745ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482046 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdq4\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-kube-api-access-hwdq4\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482089 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482104 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482135 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583548 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc 
kubenswrapper[4593]: I0129 11:22:03.583586 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583662 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583693 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583709 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583728 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66e64ba6-3c75-4430-9f03-0fe9dbb37459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583744 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63184534-fd04-4ef9-9c56-de6c30745ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583769 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583813 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/66e64ba6-3c75-4430-9f03-0fe9dbb37459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583841 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583861 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583897 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63184534-fd04-4ef9-9c56-de6c30745ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583918 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583940 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdq4\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-kube-api-access-hwdq4\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583962 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583975 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.584005 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 
11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.584024 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.584040 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cqm9\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-kube-api-access-9cqm9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.585329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.588670 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.589397 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.590056 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.591319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.591617 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.598388 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.598703 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.599342 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.599448 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.619734 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.620428 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.620818 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.628307 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.630200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.630536 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66e64ba6-3c75-4430-9f03-0fe9dbb37459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.632489 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.647538 
4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cqm9\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-kube-api-access-9cqm9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.648317 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66e64ba6-3c75-4430-9f03-0fe9dbb37459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.649828 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63184534-fd04-4ef9-9c56-de6c30745ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.705600 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdq4\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-kube-api-access-hwdq4\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.721351 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63184534-fd04-4ef9-9c56-de6c30745ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.788544 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.798442 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.902162 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.994174 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.018015 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.141424 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerStarted","Data":"b8c6914ce6bbd8622ddb4421f17355f5778b3203bfad364b74e640dad724f7dd"} Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.668168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.771323 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.086739 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" path="/var/lib/kubelet/pods/db2ccd2b-429d-43e8-a674-fb5c2abb0754/volumes" Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.088033 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" path="/var/lib/kubelet/pods/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e/volumes" Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.155116 4593 generic.go:334] "Generic (PLEG): container finished" podID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" exitCode=0 Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.156331 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerDied","Data":"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa"} Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.161400 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerStarted","Data":"91850b17c124d531934cd1d41292f78eceeecb5b1f93cdd3527be41eabefdc07"} Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.164379 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerStarted","Data":"6b81389095008434927f0697d4d4568ed6334b5826b58593df7a630a1f127e84"} Jan 29 11:22:06 crc kubenswrapper[4593]: I0129 11:22:06.177434 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerStarted","Data":"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5"} Jan 29 11:22:06 crc kubenswrapper[4593]: I0129 11:22:06.177876 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:06 crc kubenswrapper[4593]: I0129 11:22:06.212025 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" podStartSLOduration=4.211999616 podStartE2EDuration="4.211999616s" podCreationTimestamp="2026-01-29 11:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:06.204909014 +0000 UTC m=+1392.077943205" watchObservedRunningTime="2026-01-29 11:22:06.211999616 +0000 UTC m=+1392.085033807" Jan 29 11:22:07 crc kubenswrapper[4593]: I0129 11:22:07.190429 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerStarted","Data":"fb5f6e8b858298de266fd1d35275745d1ef5ea779cdb71d6a175383173b07d5f"} Jan 29 11:22:07 crc kubenswrapper[4593]: I0129 11:22:07.193114 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerStarted","Data":"d31cda1918e987444533908c599c296c91f9ed31f8f512c214c26df676d4fcdc"} Jan 29 11:22:11 crc kubenswrapper[4593]: I0129 11:22:11.410307 4593 scope.go:117] "RemoveContainer" containerID="d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4" Jan 29 11:22:11 crc kubenswrapper[4593]: I0129 11:22:11.445107 4593 scope.go:117] "RemoveContainer" containerID="bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a" Jan 29 11:22:11 crc kubenswrapper[4593]: I0129 11:22:11.501731 4593 scope.go:117] "RemoveContainer" containerID="b731ce61732546e5002e6093b39d4676cefa4ead9d8427f5427a357a3a10832e" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.343875 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.436303 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.438916 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" containerID="cri-o://479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" gracePeriod=10 Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.681675 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-mqmln"] Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.683527 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.784004 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-mqmln"] Jan 29 11:22:13 crc kubenswrapper[4593]: E0129 11:22:13.789845 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4645d9f_a4ac_4004_b76e_8f3652a300e6.slice/crio-conmon-479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834625 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-config\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834707 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmkc\" (UniqueName: \"kubernetes.io/projected/07012c75-f2fe-400a-b511-d0cc18a1ca9c-kube-api-access-xxmkc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834740 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834758 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834803 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834861 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834882 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc 
kubenswrapper[4593]: I0129 11:22:13.939714 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-config\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.939849 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmkc\" (UniqueName: \"kubernetes.io/projected/07012c75-f2fe-400a-b511-d0cc18a1ca9c-kube-api-access-xxmkc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.939962 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.939995 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.940064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.940155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.940186 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.941178 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.941746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-config\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.941758 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.942270 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.942799 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.943091 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.963079 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmkc\" (UniqueName: \"kubernetes.io/projected/07012c75-f2fe-400a-b511-d0cc18a1ca9c-kube-api-access-xxmkc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.087137 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.114305 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254596 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254863 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254886 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.255355 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.255394 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.264747 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv" (OuterVolumeSpecName: "kube-api-access-lkqvv") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "kube-api-access-lkqvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.282979 4593 generic.go:334] "Generic (PLEG): container finished" podID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" exitCode=0 Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283029 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerDied","Data":"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d"} Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283065 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerDied","Data":"c6f1f6dc4fba44b238c92a14ad6df982c542f3af9ec19723b99a766da8d106d2"} Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283124 4593 scope.go:117] "RemoveContainer" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283314 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.321885 4593 scope.go:117] "RemoveContainer" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.356296 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.362934 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.362959 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.375080 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.402791 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.410660 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.430915 4593 scope.go:117] "RemoveContainer" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" Jan 29 11:22:14 crc kubenswrapper[4593]: E0129 11:22:14.431432 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d\": container with ID starting with 479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d not found: ID does not exist" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.431459 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d"} err="failed to get container status \"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d\": rpc error: code = NotFound desc = could not find container \"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d\": container with ID starting with 479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d not found: ID does not exist" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.431484 4593 scope.go:117] "RemoveContainer" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" Jan 29 11:22:14 crc kubenswrapper[4593]: E0129 11:22:14.434885 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab\": container with ID starting with 96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab not found: ID does not exist" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.434913 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab"} err="failed to get container status \"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab\": rpc error: code = NotFound desc = could not find container \"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab\": container with ID starting with 96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab not found: ID does not exist" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.451291 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config" (OuterVolumeSpecName: "config") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.472953 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.472999 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.473014 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.473027 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.639435 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.648522 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.668055 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-mqmln"] Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.128595 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" path="/var/lib/kubelet/pods/d4645d9f-a4ac-4004-b76e-8f3652a300e6/volumes" Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.292813 4593 generic.go:334] "Generic (PLEG): container finished" podID="07012c75-f2fe-400a-b511-d0cc18a1ca9c" containerID="966fbd7555bc4ff5cc929848b271c330469b2a65aade2cef4295d87e832c1a5a" exitCode=0 Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.292861 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" event={"ID":"07012c75-f2fe-400a-b511-d0cc18a1ca9c","Type":"ContainerDied","Data":"966fbd7555bc4ff5cc929848b271c330469b2a65aade2cef4295d87e832c1a5a"} Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.292905 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" event={"ID":"07012c75-f2fe-400a-b511-d0cc18a1ca9c","Type":"ContainerStarted","Data":"7a2fc4545d35d33c6e744dd171c7d20cf3bb835be3ee07db4caa68cdffd9347f"} Jan 29 11:22:16 crc kubenswrapper[4593]: I0129 11:22:16.307738 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" event={"ID":"07012c75-f2fe-400a-b511-d0cc18a1ca9c","Type":"ContainerStarted","Data":"eb30b54d4e438ba3a2e833ecaf77af7d70e8dedd0442a5914574f9e50d781c6e"} Jan 29 11:22:16 crc kubenswrapper[4593]: I0129 11:22:16.308126 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:16 crc kubenswrapper[4593]: I0129 11:22:16.332973 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" podStartSLOduration=3.332941761 podStartE2EDuration="3.332941761s" podCreationTimestamp="2026-01-29 11:22:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:16.32852188 +0000 UTC m=+1402.201556071" watchObservedRunningTime="2026-01-29 11:22:16.332941761 +0000 UTC m=+1402.205975952" Jan 29 11:22:24 crc kubenswrapper[4593]: I0129 11:22:24.087776 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:24 crc kubenswrapper[4593]: I0129 11:22:24.228820 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:24 crc kubenswrapper[4593]: I0129 11:22:24.229112 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" containerID="cri-o://d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" gracePeriod=10 Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.208015 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303465 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303539 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303758 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303819 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303926 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303968 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod 
\"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.325495 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc" (OuterVolumeSpecName: "kube-api-access-gn8nc") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "kube-api-access-gn8nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.368028 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.377317 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.380256 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.395985 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.396474 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config" (OuterVolumeSpecName: "config") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406123 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406162 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406174 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406183 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406192 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406200 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408613 4593 generic.go:334] "Generic (PLEG): container finished" podID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" exitCode=0 Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerDied","Data":"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5"} Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408869 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerDied","Data":"b8c6914ce6bbd8622ddb4421f17355f5778b3203bfad364b74e640dad724f7dd"} Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408934 4593 scope.go:117] "RemoveContainer" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408738 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.418088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.508328 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.514397 4593 scope.go:117] "RemoveContainer" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.543850 4593 scope.go:117] "RemoveContainer" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" Jan 29 11:22:25 crc kubenswrapper[4593]: E0129 11:22:25.544466 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5\": container with ID starting with d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5 not found: ID does not exist" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.544535 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5"} err="failed to get container status \"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5\": rpc error: code = NotFound desc = could not find container \"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5\": container with ID starting with d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5 not found: ID does not exist" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.544571 4593 scope.go:117] "RemoveContainer" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" Jan 29 11:22:25 crc kubenswrapper[4593]: E0129 11:22:25.545172 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa\": container with ID starting with e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa not found: ID does not exist" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.545315 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa"} err="failed to get container status \"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa\": rpc error: code = NotFound desc = could not find container \"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa\": container with ID starting with e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa not found: ID does not exist" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.753522 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.763503 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:27 crc kubenswrapper[4593]: I0129 11:22:27.087404 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" path="/var/lib/kubelet/pods/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071/volumes" 
Jan 29 11:22:33 crc kubenswrapper[4593]: I0129 11:22:33.946417 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:22:33 crc kubenswrapper[4593]: I0129 11:22:33.947137 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:22:38 crc kubenswrapper[4593]: I0129 11:22:38.577240 4593 generic.go:334] "Generic (PLEG): container finished" podID="66e64ba6-3c75-4430-9f03-0fe9dbb37459" containerID="fb5f6e8b858298de266fd1d35275745d1ef5ea779cdb71d6a175383173b07d5f" exitCode=0 Jan 29 11:22:38 crc kubenswrapper[4593]: I0129 11:22:38.577844 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerDied","Data":"fb5f6e8b858298de266fd1d35275745d1ef5ea779cdb71d6a175383173b07d5f"} Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.588703 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerStarted","Data":"f0c1716909775e83461a904751462ca67b2b58527ce2987524c74d21fd94fd70"} Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.589192 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.596134 4593 generic.go:334] "Generic (PLEG): container finished" podID="63184534-fd04-4ef9-9c56-de6c30745ec4" containerID="d31cda1918e987444533908c599c296c91f9ed31f8f512c214c26df676d4fcdc" exitCode=0 Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.596204 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerDied","Data":"d31cda1918e987444533908c599c296c91f9ed31f8f512c214c26df676d4fcdc"} Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.638271 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.638246018 podStartE2EDuration="36.638246018s" podCreationTimestamp="2026-01-29 11:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:39.633390366 +0000 UTC m=+1425.506424557" watchObservedRunningTime="2026-01-29 11:22:39.638246018 +0000 UTC m=+1425.511280209" Jan 29 11:22:40 crc kubenswrapper[4593]: I0129 11:22:40.608418 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerStarted","Data":"cede0cad0a000e524418d7a0cf0912537e7953c668c7ccbdb10f2a56ce41c175"} Jan 29 11:22:40 crc kubenswrapper[4593]: I0129 11:22:40.609590 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 11:22:40 crc kubenswrapper[4593]: I0129 11:22:40.643676 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=37.643628352 podStartE2EDuration="37.643628352s" podCreationTimestamp="2026-01-29 11:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:40.636678084 +0000 UTC m=+1426.509712285" watchObservedRunningTime="2026-01-29 11:22:40.643628352 +0000 UTC m=+1426.516662543" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.032797 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb"] Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033574 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033587 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033597 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033603 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033617 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033623 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033651 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033659 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033842 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033862 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.034430 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.037618 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.038354 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.038386 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.041899 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.057898 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.058197 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.058285 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.058382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.063313 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb"] Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160170 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160259 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160285 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160301 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.166050 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.167292 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.182504 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.183134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.355987 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:48 crc kubenswrapper[4593]: I0129 11:22:48.193981 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb"] Jan 29 11:22:48 crc kubenswrapper[4593]: W0129 11:22:48.209034 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e4e3e3_1994_40a5_bab8_d84db2f44ddb.slice/crio-4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911 WatchSource:0}: Error finding container 4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911: Status 404 returned error can't find the container with id 4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911 Jan 29 11:22:48 crc kubenswrapper[4593]: I0129 11:22:48.695411 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerStarted","Data":"4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911"} Jan 29 11:22:53 crc kubenswrapper[4593]: I0129 11:22:53.998006 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 11:22:54 crc kubenswrapper[4593]: I0129 11:22:54.043393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:23:00 crc kubenswrapper[4593]: I0129 11:23:00.066505 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:23:00 crc kubenswrapper[4593]: I0129 11:23:00.838962 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerStarted","Data":"b4256d122a9578d2ec330718f5347f9fbc13135f7a1bbc8107ea8d0b808b7e74"} Jan 29 11:23:00 crc kubenswrapper[4593]: I0129 11:23:00.864298 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" podStartSLOduration=2.01100882 podStartE2EDuration="13.864261808s" podCreationTimestamp="2026-01-29 11:22:47 +0000 UTC" firstStartedPulling="2026-01-29 11:22:48.211075209 +0000 UTC m=+1434.084109400" lastFinishedPulling="2026-01-29 11:23:00.064328197 +0000 UTC m=+1445.937362388" observedRunningTime="2026-01-29 11:23:00.854785202 +0000 UTC m=+1446.727819403" watchObservedRunningTime="2026-01-29 11:23:00.864261808 +0000 UTC m=+1446.737295999" Jan 29 11:23:03 crc kubenswrapper[4593]: I0129 11:23:03.946571 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:23:03 crc kubenswrapper[4593]: I0129 11:23:03.947233 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:23:15 crc kubenswrapper[4593]: I0129 11:23:15.995410 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerID="b4256d122a9578d2ec330718f5347f9fbc13135f7a1bbc8107ea8d0b808b7e74" exitCode=0 Jan 29 11:23:15 crc kubenswrapper[4593]: I0129 11:23:15.995504 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerDied","Data":"b4256d122a9578d2ec330718f5347f9fbc13135f7a1bbc8107ea8d0b808b7e74"} Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.527921 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685087 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685153 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685452 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.706698 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.715961 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg" (OuterVolumeSpecName: "kube-api-access-mx9bg") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "kube-api-access-mx9bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.723893 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.728266 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory" (OuterVolumeSpecName: "inventory") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787616 4593 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787691 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787708 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787720 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.021863 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerDied","Data":"4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911"} Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.021910 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.021979 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.187279 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5"] Jan 29 11:23:18 crc kubenswrapper[4593]: E0129 11:23:18.187873 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.187899 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.188169 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.188988 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.195222 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.196130 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.196459 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.196455 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.207003 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5"] Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.297688 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.297796 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.297829 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.399269 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.399423 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.399448 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.405104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.411140 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.426468 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.522401 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:19 crc kubenswrapper[4593]: I0129 11:23:19.298776 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5"] Jan 29 11:23:20 crc kubenswrapper[4593]: I0129 11:23:20.039897 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerStarted","Data":"8bb418a005f09c4d6aa7fb45209905c676a3ac1244c00e9b891a5a9b4387ad6a"} Jan 29 11:23:21 crc kubenswrapper[4593]: I0129 11:23:21.050624 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerStarted","Data":"faea85351cda05ece426a63e59c4f9ccd6e9b1955b988769b98202cd83285465"} Jan 29 11:23:21 crc kubenswrapper[4593]: I0129 11:23:21.082253 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" podStartSLOduration=2.574486239 podStartE2EDuration="3.082218381s" podCreationTimestamp="2026-01-29 11:23:18 +0000 UTC" firstStartedPulling="2026-01-29 11:23:19.307602993 +0000 UTC m=+1465.180637184" lastFinishedPulling="2026-01-29 11:23:19.815335135 +0000 UTC m=+1465.688369326" observedRunningTime="2026-01-29 11:23:21.072316373 +0000 UTC m=+1466.945350564" watchObservedRunningTime="2026-01-29 11:23:21.082218381 +0000 UTC m=+1466.955252582" Jan 29 11:23:23 crc kubenswrapper[4593]: I0129 11:23:23.069873 4593 generic.go:334] "Generic (PLEG): container finished" podID="ce80c16f-5109-46b9-9438-4f05a4132175" containerID="faea85351cda05ece426a63e59c4f9ccd6e9b1955b988769b98202cd83285465" exitCode=0 Jan 29 11:23:23 crc kubenswrapper[4593]: I0129 11:23:23.069924 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerDied","Data":"faea85351cda05ece426a63e59c4f9ccd6e9b1955b988769b98202cd83285465"} Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.533866 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.631324 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"ce80c16f-5109-46b9-9438-4f05a4132175\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.631445 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"ce80c16f-5109-46b9-9438-4f05a4132175\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.631521 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod \"ce80c16f-5109-46b9-9438-4f05a4132175\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.641995 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx" (OuterVolumeSpecName: "kube-api-access-cxvtx") pod "ce80c16f-5109-46b9-9438-4f05a4132175" (UID: "ce80c16f-5109-46b9-9438-4f05a4132175"). InnerVolumeSpecName "kube-api-access-cxvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.662072 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce80c16f-5109-46b9-9438-4f05a4132175" (UID: "ce80c16f-5109-46b9-9438-4f05a4132175"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.675953 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory" (OuterVolumeSpecName: "inventory") pod "ce80c16f-5109-46b9-9438-4f05a4132175" (UID: "ce80c16f-5109-46b9-9438-4f05a4132175"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.733571 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.733608 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.733618 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.088436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerDied","Data":"8bb418a005f09c4d6aa7fb45209905c676a3ac1244c00e9b891a5a9b4387ad6a"} Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.088482 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb418a005f09c4d6aa7fb45209905c676a3ac1244c00e9b891a5a9b4387ad6a" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.088505 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.188366 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"] Jan 29 11:23:25 crc kubenswrapper[4593]: E0129 11:23:25.188900 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce80c16f-5109-46b9-9438-4f05a4132175" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.188922 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce80c16f-5109-46b9-9438-4f05a4132175" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.189119 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce80c16f-5109-46b9-9438-4f05a4132175" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.189822 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.191672 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.192347 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.193300 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.196356 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.209963 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"] Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.244688 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.244816 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.244991 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.245142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.346947 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.347376 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.347426 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.347530 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.351621 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.354659 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.366253 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.376464 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.505553 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:26 crc kubenswrapper[4593]: I0129 11:23:26.076672 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"] Jan 29 11:23:26 crc kubenswrapper[4593]: I0129 11:23:26.104879 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerStarted","Data":"927630ede3ceb2d2afac7670352e3381e678c1d8aa9b338fadd8176b90b8c0c9"} Jan 29 11:23:27 crc kubenswrapper[4593]: I0129 11:23:27.127880 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerStarted","Data":"003e33f77ddab212895fe8ef3045f9e0f29137cf03f6bd5a01a49972f0f487bc"} Jan 29 11:23:27 crc kubenswrapper[4593]: I0129 11:23:27.169438 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" podStartSLOduration=1.735073285 podStartE2EDuration="2.169416847s" podCreationTimestamp="2026-01-29 11:23:25 +0000 UTC" firstStartedPulling="2026-01-29 11:23:26.073588786 +0000 UTC m=+1471.946622977" lastFinishedPulling="2026-01-29 11:23:26.507932348 +0000 UTC m=+1472.380966539" observedRunningTime="2026-01-29 11:23:27.150458763 +0000 UTC m=+1473.023492954" watchObservedRunningTime="2026-01-29 11:23:27.169416847 +0000 UTC m=+1473.042451038" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.945851 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.946216 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.946274 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.947057 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.947114 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa" gracePeriod=600 Jan 29 11:23:35 crc kubenswrapper[4593]: I0129 11:23:35.203846 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" 
containerID="6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa" exitCode=0 Jan 29 11:23:35 crc kubenswrapper[4593]: I0129 11:23:35.203913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"} Jan 29 11:23:35 crc kubenswrapper[4593]: I0129 11:23:35.205306 4593 scope.go:117] "RemoveContainer" containerID="000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002" Jan 29 11:23:37 crc kubenswrapper[4593]: I0129 11:23:37.231836 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"} Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.777765 4593 scope.go:117] "RemoveContainer" containerID="b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c" Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.809338 4593 scope.go:117] "RemoveContainer" containerID="fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80" Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.835179 4593 scope.go:117] "RemoveContainer" containerID="532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac" Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.862433 4593 scope.go:117] "RemoveContainer" containerID="6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9" Jan 29 11:24:19 crc kubenswrapper[4593]: I0129 11:24:19.941724 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:19 crc kubenswrapper[4593]: I0129 11:24:19.944783 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:19 crc kubenswrapper[4593]: I0129 11:24:19.972756 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.145945 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.146308 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.146463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.248768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.249308 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.249833 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.250217 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.250223 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.276429 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.570856 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:21 crc kubenswrapper[4593]: I0129 11:24:21.054337 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.129682 4593 generic.go:334] "Generic (PLEG): container finished" podID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" exitCode=0 Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.129983 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be"} Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.139660 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerStarted","Data":"0f223bb8ffe465ecc2b4d7adaa6dd0f8d56f5e4a5b1abbf62714c243ab708a1a"} Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.133493 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:24:24 crc kubenswrapper[4593]: I0129 11:24:24.263236 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerStarted","Data":"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde"} Jan 29 11:24:27 crc kubenswrapper[4593]: I0129 11:24:27.597775 4593 generic.go:334] "Generic (PLEG): container finished" podID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" exitCode=0 Jan 29 11:24:27 crc kubenswrapper[4593]: I0129 11:24:27.598184 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde"} Jan 29 11:24:28 crc kubenswrapper[4593]: I0129 11:24:28.610294 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerStarted","Data":"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5"} Jan 29 11:24:28 crc kubenswrapper[4593]: I0129 11:24:28.636071 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gj62" podStartSLOduration=3.44295429 podStartE2EDuration="9.636036166s" podCreationTimestamp="2026-01-29 11:24:19 +0000 UTC" firstStartedPulling="2026-01-29 11:24:22.133255826 +0000 UTC m=+1528.006290017" lastFinishedPulling="2026-01-29 11:24:28.326337702 +0000 UTC m=+1534.199371893" observedRunningTime="2026-01-29 11:24:28.633468376 +0000 UTC m=+1534.506502567" watchObservedRunningTime="2026-01-29 
11:24:28.636036166 +0000 UTC m=+1534.509070357" Jan 29 11:24:30 crc kubenswrapper[4593]: I0129 11:24:30.572661 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:30 crc kubenswrapper[4593]: I0129 11:24:30.572723 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:31 crc kubenswrapper[4593]: I0129 11:24:31.652715 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4gj62" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" probeResult="failure" output=< Jan 29 11:24:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:24:31 crc kubenswrapper[4593]: > Jan 29 11:24:41 crc kubenswrapper[4593]: I0129 11:24:41.626240 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4gj62" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" probeResult="failure" output=< Jan 29 11:24:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:24:41 crc kubenswrapper[4593]: > Jan 29 11:24:50 crc kubenswrapper[4593]: I0129 11:24:50.621002 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:50 crc kubenswrapper[4593]: I0129 11:24:50.672404 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:51 crc kubenswrapper[4593]: I0129 11:24:51.140103 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:51 crc kubenswrapper[4593]: I0129 11:24:51.846538 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4gj62" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" containerID="cri-o://2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" gracePeriod=2 Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.316856 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356758 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356893 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities" (OuterVolumeSpecName: "utilities") pod "7cff8d0c-7d4a-4327-9785-6ca7367e906f" (UID: "7cff8d0c-7d4a-4327-9785-6ca7367e906f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.357567 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.375590 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh" (OuterVolumeSpecName: "kube-api-access-njgdh") pod "7cff8d0c-7d4a-4327-9785-6ca7367e906f" (UID: "7cff8d0c-7d4a-4327-9785-6ca7367e906f"). InnerVolumeSpecName "kube-api-access-njgdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.414789 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cff8d0c-7d4a-4327-9785-6ca7367e906f" (UID: "7cff8d0c-7d4a-4327-9785-6ca7367e906f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.460013 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.460047 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856781 4593 generic.go:334] "Generic (PLEG): container finished" podID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" exitCode=0 Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856845 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856856 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5"} Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856955 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"0f223bb8ffe465ecc2b4d7adaa6dd0f8d56f5e4a5b1abbf62714c243ab708a1a"} Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856997 4593 scope.go:117] "RemoveContainer" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.887046 4593 scope.go:117] "RemoveContainer" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.894833 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.930963 4593 scope.go:117] "RemoveContainer" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.947011 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.982405 4593 scope.go:117] "RemoveContainer" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" Jan 29 11:24:52 crc kubenswrapper[4593]: E0129 11:24:52.983112 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5\": container with ID starting with 2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5 not found: ID does not exist" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983233 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5"} err="failed to get container status 
\"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5\": rpc error: code = NotFound desc = could not find container \"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5\": container with ID starting with 2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5 not found: ID does not exist" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983332 4593 scope.go:117] "RemoveContainer" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" Jan 29 11:24:52 crc kubenswrapper[4593]: E0129 11:24:52.983622 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde\": container with ID starting with 13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde not found: ID does not exist" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983721 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde"} err="failed to get container status \"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde\": rpc error: code = NotFound desc = could not find container \"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde\": container with ID starting with 13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde not found: ID does not exist" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983810 4593 scope.go:117] "RemoveContainer" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" Jan 29 11:24:52 crc kubenswrapper[4593]: E0129 11:24:52.984083 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be\": container with ID starting with 097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be not found: ID does not exist" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.984181 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be"} err="failed to get container status \"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be\": rpc error: code = NotFound desc = could not find container \"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be\": container with ID starting with 097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be not found: ID does not exist" Jan 29 11:24:53 crc kubenswrapper[4593]: I0129 11:24:53.085739 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" path="/var/lib/kubelet/pods/7cff8d0c-7d4a-4327-9785-6ca7367e906f/volumes" Jan 29 11:25:11 crc kubenswrapper[4593]: I0129 11:25:11.955843 4593 scope.go:117] "RemoveContainer" containerID="87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e" Jan 29 11:25:12 crc kubenswrapper[4593]: I0129 11:25:12.013949 4593 scope.go:117] "RemoveContainer" containerID="b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d" Jan 29 11:25:12 crc kubenswrapper[4593]: I0129 11:25:12.059528 4593 scope.go:117] "RemoveContainer" 
containerID="9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731" Jan 29 11:25:12 crc kubenswrapper[4593]: I0129 11:25:12.089686 4593 scope.go:117] "RemoveContainer" containerID="c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1" Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.086407 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.111770 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.124813 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.138873 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.154044 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.168729 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.183781 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.195970 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.092793 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4524da-e80b-4bd2-a116-061694417007" path="/var/lib/kubelet/pods/3b4524da-e80b-4bd2-a116-061694417007/volumes" Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.095091 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" path="/var/lib/kubelet/pods/e2687b78-f425-4fae-9af8-7021f3e01e36/volumes" Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.095978 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" path="/var/lib/kubelet/pods/f2eab48b-4545-4fa3-81f1-6247ebcf425e/volumes" Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.096949 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" path="/var/lib/kubelet/pods/fdb1fb5b-1dc7-487a-b49d-d542eef7af31/volumes" Jan 29 11:25:22 crc kubenswrapper[4593]: I0129 11:25:22.033211 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:25:22 crc kubenswrapper[4593]: I0129 11:25:22.042795 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:25:23 crc kubenswrapper[4593]: I0129 11:25:23.027891 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:25:23 crc kubenswrapper[4593]: I0129 11:25:23.036714 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:25:23 crc kubenswrapper[4593]: I0129 11:25:23.086301 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12899826-03ea-4b37-b523-74946fd54dee" path="/var/lib/kubelet/pods/12899826-03ea-4b37-b523-74946fd54dee/volumes" Jan 29 11:25:23 crc 
kubenswrapper[4593]: I0129 11:25:23.087240 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" path="/var/lib/kubelet/pods/a84071c3-9564-41ef-b38f-fd40e1403fa8/volumes" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.924156 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"] Jan 29 11:25:32 crc kubenswrapper[4593]: E0129 11:25:32.925327 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925359 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" Jan 29 11:25:32 crc kubenswrapper[4593]: E0129 11:25:32.925378 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-content" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925386 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-content" Jan 29 11:25:32 crc kubenswrapper[4593]: E0129 11:25:32.925406 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-utilities" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925415 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-utilities" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925721 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.927544 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.942139 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"] Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.008027 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.008142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.008232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.110405 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.110563 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.110619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.111035 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.111829 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.131169 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.254219 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.731932 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"] Jan 29 11:25:34 crc kubenswrapper[4593]: I0129 11:25:34.254355 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2" exitCode=0 Jan 29 11:25:34 crc kubenswrapper[4593]: I0129 11:25:34.254436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"} Jan 29 11:25:34 crc kubenswrapper[4593]: I0129 11:25:34.254758 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerStarted","Data":"f2ddb1195350fe2e49e68f4403861bf9781674dc12a681b98af4ebb0c6014187"} Jan 29 11:25:36 crc kubenswrapper[4593]: I0129 11:25:36.272943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerStarted","Data":"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"} Jan 29 11:25:38 crc kubenswrapper[4593]: I0129 11:25:38.292537 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951" exitCode=0 Jan 29 11:25:38 crc kubenswrapper[4593]: I0129 11:25:38.292733 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"} Jan 29 11:25:39 crc kubenswrapper[4593]: I0129 11:25:39.039553 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:25:39 crc kubenswrapper[4593]: I0129 11:25:39.048450 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:25:39 crc kubenswrapper[4593]: I0129 11:25:39.087770 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" path="/var/lib/kubelet/pods/56d59502-9350-4842-bd01-35d55f0b47fa/volumes" Jan 29 11:25:40 crc kubenswrapper[4593]: I0129 11:25:40.313072 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerStarted","Data":"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"} Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.032503 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n85wt" 
podStartSLOduration=5.756956356 podStartE2EDuration="11.032469614s" podCreationTimestamp="2026-01-29 11:25:32 +0000 UTC" firstStartedPulling="2026-01-29 11:25:34.256994079 +0000 UTC m=+1600.130028270" lastFinishedPulling="2026-01-29 11:25:39.532507337 +0000 UTC m=+1605.405541528" observedRunningTime="2026-01-29 11:25:40.334385092 +0000 UTC m=+1606.207419283" watchObservedRunningTime="2026-01-29 11:25:43.032469614 +0000 UTC m=+1608.905503805" Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.038899 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.045895 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.086283 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" path="/var/lib/kubelet/pods/52b59817-1d9d-431d-8055-cf98107b89a2/volumes" Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.254993 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.255045 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.307673 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.045312 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.057835 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.069006 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.079748 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.090385 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.098540 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.107193 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.115393 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.124275 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.132462 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.096820 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" path="/var/lib/kubelet/pods/115d89c5-8038-4b55-9f1d-d0f169ee0b53/volumes" Jan 29 11:25:45 crc 
kubenswrapper[4593]: I0129 11:25:45.098004 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" path="/var/lib/kubelet/pods/1ef7a572-9631-4078-a6ed-419d2a4dfdf9/volumes" Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.099259 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" path="/var/lib/kubelet/pods/6d46f220-cb33-4768-91f5-c59e98c41af4/volumes" Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.100177 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" path="/var/lib/kubelet/pods/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe/volumes" Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.101936 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" path="/var/lib/kubelet/pods/fbee97db-a8f1-43e0-ac0b-ec58529b2c03/volumes" Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.304556 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.363897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"] Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.435034 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n85wt" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" containerID="cri-o://60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" gracePeriod=2 Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.959525 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.073750 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.084575 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.121521 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"f5ef266e-6732-412f-82a7-23482ba2dfe2\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.121993 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"f5ef266e-6732-412f-82a7-23482ba2dfe2\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.122250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"f5ef266e-6732-412f-82a7-23482ba2dfe2\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.123094 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities" (OuterVolumeSpecName: "utilities") pod "f5ef266e-6732-412f-82a7-23482ba2dfe2" (UID: "f5ef266e-6732-412f-82a7-23482ba2dfe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.133515 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5" (OuterVolumeSpecName: "kube-api-access-bbjz5") pod "f5ef266e-6732-412f-82a7-23482ba2dfe2" (UID: "f5ef266e-6732-412f-82a7-23482ba2dfe2"). InnerVolumeSpecName "kube-api-access-bbjz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.155028 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ef266e-6732-412f-82a7-23482ba2dfe2" (UID: "f5ef266e-6732-412f-82a7-23482ba2dfe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.225221 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.225266 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.225282 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451894 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" exitCode=0 Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451940 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"} Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451966 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"f2ddb1195350fe2e49e68f4403861bf9781674dc12a681b98af4ebb0c6014187"} Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451984 4593 scope.go:117] "RemoveContainer" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.452132 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.483997 4593 scope.go:117] "RemoveContainer" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.510842 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"] Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.515156 4593 scope.go:117] "RemoveContainer" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.523777 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"] Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.613091 4593 scope.go:117] "RemoveContainer" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" Jan 29 11:25:54 crc kubenswrapper[4593]: E0129 11:25:54.613747 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191\": container with ID starting with 60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191 not found: ID does not exist" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.613883 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"} err="failed to get container status \"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191\": rpc error: code = NotFound desc = could not find container \"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191\": container with ID starting with 60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191 not found: ID does not exist" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.613975 4593 scope.go:117] "RemoveContainer" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951" Jan 29 11:25:54 crc kubenswrapper[4593]: E0129 11:25:54.615087 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951\": container with ID starting with e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951 not found: ID does not exist" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.615151 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"} err="failed to get container status \"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951\": rpc error: code = NotFound desc = could not find container \"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951\": container with ID starting with e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951 not found: ID does not exist" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.615181 4593 scope.go:117] "RemoveContainer" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2" Jan 29 11:25:54 crc kubenswrapper[4593]: E0129 11:25:54.617000 4593 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": container with ID starting with b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2 not found: ID does not exist" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.617034 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"} err="failed to get container status \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": rpc error: code = NotFound desc = could not find container \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": container with ID starting with b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2 not found: ID does not exist" Jan 29 11:25:55 crc kubenswrapper[4593]: I0129 11:25:55.093882 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" path="/var/lib/kubelet/pods/9c0b4a25-540c-47dd-96fb-fdc6872721b5/volumes" Jan 29 11:25:55 crc kubenswrapper[4593]: I0129 11:25:55.095103 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" path="/var/lib/kubelet/pods/f5ef266e-6732-412f-82a7-23482ba2dfe2/volumes" Jan 29 11:26:03 crc kubenswrapper[4593]: I0129 11:26:03.946183 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:26:03 crc kubenswrapper[4593]: I0129 11:26:03.946756 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.198142 4593 scope.go:117] "RemoveContainer" containerID="18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.261985 4593 scope.go:117] "RemoveContainer" containerID="8daab26085422d8b821fec9dd8845576bd1f7996b7bd02a206e4ec1ed954891a" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.292755 4593 scope.go:117] "RemoveContainer" containerID="cfeb01d9eafd6f66b4b9db53f4dc0ef8f8de91ea87a6bf0dc6e1a2b4cfb6bce8" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.338490 4593 scope.go:117] "RemoveContainer" containerID="43d82ed1472c3625ce9296a41e8408518af652ca97d81bd779f6e88331c78c4e" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.388215 4593 scope.go:117] "RemoveContainer" containerID="2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.441922 4593 scope.go:117] "RemoveContainer" containerID="b2686e149913ab0d7eb8e1c1ab82711e8bc8d0f1e7c674ad1bb843e01690c119" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.483045 4593 scope.go:117] "RemoveContainer" containerID="d302776b71ae9de08283f287bc6180cc80cb27e0867558e7d6ef7199f716f657" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.513846 4593 scope.go:117] "RemoveContainer" 
containerID="f4b832d6a02cddde771b6eeb4da2b7e8c024cb3a623b350dff1e411d17b9ecfd" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.541930 4593 scope.go:117] "RemoveContainer" containerID="26e9d793caead0da7c6fbe2d2cc88998f753f02199ec672516904069fc61c2fc" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.560730 4593 scope.go:117] "RemoveContainer" containerID="db6e520018218e0ecd1d4a8d69f63a0e96eea393f5e0abbccf345503319fb4c2" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.592221 4593 scope.go:117] "RemoveContainer" containerID="b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.673410 4593 scope.go:117] "RemoveContainer" containerID="9d37cf9a7f03d5742ea9e7314623a8e8f189e15526f469c97b71739526cfc70b" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.714401 4593 scope.go:117] "RemoveContainer" containerID="c00b7731a137cc5e16b524de8c2c6a1402d07e79205488315ad3920c71b523b5" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.753799 4593 scope.go:117] "RemoveContainer" containerID="1146c75a258cb4ad7f71cc2e37d3a74813526e1b88d59d1880e58f1ae91dd7d1" Jan 29 11:26:33 crc kubenswrapper[4593]: I0129 11:26:33.946961 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:26:33 crc kubenswrapper[4593]: I0129 11:26:33.947481 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.762974 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:26:46 crc kubenswrapper[4593]: E0129 11:26:46.763939 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-utilities" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.763954 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-utilities" Jan 29 11:26:46 crc kubenswrapper[4593]: E0129 11:26:46.763978 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-content" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.763984 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-content" Jan 29 11:26:46 crc kubenswrapper[4593]: E0129 11:26:46.763994 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.764000 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.764357 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.765729 4593 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.801688 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.912909 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.912988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.913231 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015414 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015462 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015502 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.018928 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.038224 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.098112 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.589785 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:26:48 crc kubenswrapper[4593]: I0129 11:26:48.032665 4593 generic.go:334] "Generic (PLEG): container finished" podID="86e2d453-9800-4924-84df-86f0f43e5d99" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" exitCode=0 Jan 29 11:26:48 crc kubenswrapper[4593]: I0129 11:26:48.032861 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7"} Jan 29 11:26:48 crc kubenswrapper[4593]: I0129 11:26:48.033015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerStarted","Data":"67966c1309c45a48c63afccd47f924ae485ed1b5ff7fd66be898dc112116f944"} Jan 29 11:26:49 crc kubenswrapper[4593]: I0129 11:26:49.046574 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerStarted","Data":"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc"} Jan 29 11:26:52 crc kubenswrapper[4593]: I0129 11:26:52.077439 4593 generic.go:334] "Generic (PLEG): container finished" podID="86e2d453-9800-4924-84df-86f0f43e5d99" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" exitCode=0 Jan 29 11:26:52 crc kubenswrapper[4593]: I0129 11:26:52.077523 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc"} Jan 29 11:26:53 crc kubenswrapper[4593]: I0129 11:26:53.090986 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerStarted","Data":"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378"} Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.098710 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.099971 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.154706 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.188262 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-jqjbm" podStartSLOduration=6.486181662 podStartE2EDuration="11.188238588s" podCreationTimestamp="2026-01-29 11:26:46 +0000 UTC" firstStartedPulling="2026-01-29 11:26:48.034892663 +0000 UTC m=+1673.907926854" lastFinishedPulling="2026-01-29 11:26:52.736949589 +0000 UTC m=+1678.609983780" observedRunningTime="2026-01-29 11:26:53.122773003 +0000 UTC m=+1678.995807204" watchObservedRunningTime="2026-01-29 11:26:57.188238588 +0000 UTC m=+1683.061272779" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.211887 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.399650 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.173409 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqjbm" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server" containerID="cri-o://c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" gracePeriod=2 Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.646702 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.784571 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"86e2d453-9800-4924-84df-86f0f43e5d99\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.784770 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"86e2d453-9800-4924-84df-86f0f43e5d99\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.784903 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"86e2d453-9800-4924-84df-86f0f43e5d99\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.786425 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities" (OuterVolumeSpecName: "utilities") pod "86e2d453-9800-4924-84df-86f0f43e5d99" (UID: "86e2d453-9800-4924-84df-86f0f43e5d99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.793021 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9" (OuterVolumeSpecName: "kube-api-access-xvng9") pod "86e2d453-9800-4924-84df-86f0f43e5d99" (UID: "86e2d453-9800-4924-84df-86f0f43e5d99"). InnerVolumeSpecName "kube-api-access-xvng9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.846500 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86e2d453-9800-4924-84df-86f0f43e5d99" (UID: "86e2d453-9800-4924-84df-86f0f43e5d99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.886791 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.886828 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.886839 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182572 4593 generic.go:334] "Generic (PLEG): container finished" podID="86e2d453-9800-4924-84df-86f0f43e5d99" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" exitCode=0 Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182617 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378"} Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182652 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182671 4593 scope.go:117] "RemoveContainer" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182660 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"67966c1309c45a48c63afccd47f924ae485ed1b5ff7fd66be898dc112116f944"} Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.210732 4593 scope.go:117] "RemoveContainer" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.238410 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.272837 4593 scope.go:117] "RemoveContainer" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.275982 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.336717 4593 scope.go:117] "RemoveContainer" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" Jan 29 11:27:00 crc kubenswrapper[4593]: E0129 11:27:00.340827 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378\": container with ID starting with c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378 not found: ID does not exist" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.340887 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378"} err="failed to get container status \"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378\": rpc error: code = NotFound desc = could not find container \"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378\": container with ID starting with c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378 not found: ID does not exist" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.340924 4593 scope.go:117] "RemoveContainer" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" Jan 29 11:27:00 crc kubenswrapper[4593]: E0129 11:27:00.341620 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc\": container with ID starting with f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc not found: ID does not exist" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.341666 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc"} err="failed to get container status \"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc\": rpc error: code = NotFound desc = could not find 
container \"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc\": container with ID starting with f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc not found: ID does not exist" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.341686 4593 scope.go:117] "RemoveContainer" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" Jan 29 11:27:00 crc kubenswrapper[4593]: E0129 11:27:00.342338 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7\": container with ID starting with eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7 not found: ID does not exist" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.342365 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7"} err="failed to get container status \"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7\": rpc error: code = NotFound desc = could not find container \"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7\": container with ID starting with eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7 not found: ID does not exist" Jan 29 11:27:01 crc kubenswrapper[4593]: I0129 11:27:01.085387 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" path="/var/lib/kubelet/pods/86e2d453-9800-4924-84df-86f0f43e5d99/volumes" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.946249 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.946611 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.946744 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.947500 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.947569 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" gracePeriod=600 Jan 29 11:27:04 crc kubenswrapper[4593]: E0129 11:27:04.075589 4593 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.227359 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" exitCode=0 Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.227455 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"} Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.227872 4593 scope.go:117] "RemoveContainer" containerID="6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa" Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.228540 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:04 crc kubenswrapper[4593]: E0129 11:27:04.228861 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:06 crc kubenswrapper[4593]: I0129 11:27:06.249548 4593 generic.go:334] "Generic (PLEG): container finished" podID="e4241343-d4f5-4690-972e-55f054a93f30" containerID="003e33f77ddab212895fe8ef3045f9e0f29137cf03f6bd5a01a49972f0f487bc" exitCode=0 Jan 29 11:27:06 crc kubenswrapper[4593]: I0129 11:27:06.249592 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerDied","Data":"003e33f77ddab212895fe8ef3045f9e0f29137cf03f6bd5a01a49972f0f487bc"} Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.703195 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742188 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742392 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742453 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742597 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.757001 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj" (OuterVolumeSpecName: "kube-api-access-s8jfj") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "kube-api-access-s8jfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.757136 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.782809 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory" (OuterVolumeSpecName: "inventory") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.785577 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846272 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") on node \"crc\" DevicePath \"\"" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846541 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846675 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846758 4593 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.273792 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerDied","Data":"927630ede3ceb2d2afac7670352e3381e678c1d8aa9b338fadd8176b90b8c0c9"} Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.273865 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927630ede3ceb2d2afac7670352e3381e678c1d8aa9b338fadd8176b90b8c0c9" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.273907 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.382377 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"] Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383192 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4241343-d4f5-4690-972e-55f054a93f30" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383216 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4241343-d4f5-4690-972e-55f054a93f30" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383237 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-utilities" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383246 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-utilities" Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383255 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383264 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server" Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383278 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-content" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383309 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-content" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383586 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383608 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4241343-d4f5-4690-972e-55f054a93f30" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.386866 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.389929 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.389982 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.390158 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.390300 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.396090 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"] Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.459735 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.459971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.460120 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.562131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.562221 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.562261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.572070 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.581059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.583106 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.706375 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:09 crc kubenswrapper[4593]: I0129 11:27:09.248081 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"] Jan 29 11:27:09 crc kubenswrapper[4593]: I0129 11:27:09.283148 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerStarted","Data":"b0ae0b25831e041bfe96f6c4a3d79e01d947c880509926da1feb03c9559ebd7a"} Jan 29 11:27:11 crc kubenswrapper[4593]: I0129 11:27:11.343553 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerStarted","Data":"5c199554479c727e40d38e1c73ab1886c6ddf721c6751444cd8da17a69216ec5"} Jan 29 11:27:11 crc kubenswrapper[4593]: I0129 11:27:11.370944 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" podStartSLOduration=2.268501322 podStartE2EDuration="3.370892542s" podCreationTimestamp="2026-01-29 11:27:08 +0000 UTC" firstStartedPulling="2026-01-29 11:27:09.250961261 +0000 UTC m=+1695.123995462" lastFinishedPulling="2026-01-29 11:27:10.353352491 +0000 UTC m=+1696.226386682" observedRunningTime="2026-01-29 11:27:11.360813249 +0000 UTC m=+1697.233847440" watchObservedRunningTime="2026-01-29 11:27:11.370892542 +0000 UTC m=+1697.243926733" Jan 29 11:27:13 crc kubenswrapper[4593]: I0129 11:27:13.205112 4593 scope.go:117] "RemoveContainer" containerID="660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c" Jan 29 11:27:15 crc 
kubenswrapper[4593]: I0129 11:27:15.065553 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.081315 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:15 crc kubenswrapper[4593]: E0129 11:27:15.081652 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.100146 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.100194 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.106824 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:27:16 crc kubenswrapper[4593]: I0129 11:27:16.036565 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:27:16 crc kubenswrapper[4593]: I0129 11:27:16.047253 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:27:17 crc kubenswrapper[4593]: I0129 11:27:17.086734 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" path="/var/lib/kubelet/pods/31f590aa-412a-41ab-92fd-2202c9b456b4/volumes" Jan 29 11:27:17 crc kubenswrapper[4593]: I0129 11:27:17.087418 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" path="/var/lib/kubelet/pods/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9/volumes" Jan 29 11:27:17 crc kubenswrapper[4593]: I0129 11:27:17.088040 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" path="/var/lib/kubelet/pods/a6bbbb39-f79c-4647-976b-6225ac21e63b/volumes" Jan 29 11:27:24 crc kubenswrapper[4593]: I0129 11:27:24.034047 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:27:24 crc kubenswrapper[4593]: I0129 11:27:24.046701 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:27:25 crc kubenswrapper[4593]: I0129 11:27:25.086062 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39458c0-d624-4ed0-8444-417e479028d2" path="/var/lib/kubelet/pods/c39458c0-d624-4ed0-8444-417e479028d2/volumes" Jan 29 11:27:27 crc kubenswrapper[4593]: I0129 11:27:27.075283 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:27 crc kubenswrapper[4593]: E0129 11:27:27.076856 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" 
podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:31 crc kubenswrapper[4593]: I0129 11:27:31.042811 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:27:31 crc kubenswrapper[4593]: I0129 11:27:31.053826 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:27:31 crc kubenswrapper[4593]: I0129 11:27:31.086306 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" path="/var/lib/kubelet/pods/9a0467fe-4786-4231-bf52-8a305e9a4f89/volumes" Jan 29 11:27:40 crc kubenswrapper[4593]: I0129 11:27:40.074837 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:40 crc kubenswrapper[4593]: E0129 11:27:40.076654 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:53 crc kubenswrapper[4593]: I0129 11:27:53.075693 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:53 crc kubenswrapper[4593]: E0129 11:27:53.077788 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:57 crc kubenswrapper[4593]: I0129 11:27:57.064600 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:27:57 crc kubenswrapper[4593]: I0129 11:27:57.082557 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:27:57 crc kubenswrapper[4593]: I0129 11:27:57.108213 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" path="/var/lib/kubelet/pods/1563c063-cd19-4793-97c0-45ca3e4a3e0c/volumes" Jan 29 11:28:04 crc kubenswrapper[4593]: I0129 11:28:04.075939 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:28:04 crc kubenswrapper[4593]: E0129 11:28:04.076790 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.299268 4593 scope.go:117] "RemoveContainer" containerID="06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.346510 4593 scope.go:117] "RemoveContainer" containerID="dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc" Jan 29 
11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.402785 4593 scope.go:117] "RemoveContainer" containerID="0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.469415 4593 scope.go:117] "RemoveContainer" containerID="b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.512127 4593 scope.go:117] "RemoveContainer" containerID="99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.560025 4593 scope.go:117] "RemoveContainer" containerID="6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99" Jan 29 11:28:15 crc kubenswrapper[4593]: I0129 11:28:15.111654 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:28:15 crc kubenswrapper[4593]: E0129 11:28:15.113347 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:28:28 crc kubenswrapper[4593]: I0129 11:28:28.075297 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:28:28 crc kubenswrapper[4593]: E0129 11:28:28.076433 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:28:39 crc kubenswrapper[4593]: I0129 11:28:39.075790 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:28:39 crc kubenswrapper[4593]: E0129 11:28:39.076676 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:28:51 crc kubenswrapper[4593]: I0129 11:28:51.075169 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:28:51 crc kubenswrapper[4593]: E0129 11:28:51.076024 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:03 crc kubenswrapper[4593]: I0129 11:29:03.075802 4593 scope.go:117] "RemoveContainer" 
containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:03 crc kubenswrapper[4593]: E0129 11:29:03.076723 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:04 crc kubenswrapper[4593]: I0129 11:29:04.051095 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:29:04 crc kubenswrapper[4593]: I0129 11:29:04.062514 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.055330 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.070386 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.092946 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" path="/var/lib/kubelet/pods/3cc0715e-34d0-4d5e-a8cc-5809adc6e264/volumes" Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.098786 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.100027 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.110504 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.117914 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.127025 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.135882 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.144348 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.152847 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.095795 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" path="/var/lib/kubelet/pods/5349ab78-1643-47e8-bfca-20d31e2f459f/volumes" Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.097168 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" path="/var/lib/kubelet/pods/6b37d23e-84cc-4059-a109-18fec66cd168/volumes" Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.098360 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c560b58-f036-4946-aca6-d59c9502954e" 
path="/var/lib/kubelet/pods/8c560b58-f036-4946-aca6-d59c9502954e/volumes" Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.099442 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" path="/var/lib/kubelet/pods/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b/volumes" Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.101460 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" path="/var/lib/kubelet/pods/d60bb61f-5204-4149-9922-70c6b0916c48/volumes" Jan 29 11:29:13 crc kubenswrapper[4593]: I0129 11:29:13.799843 4593 scope.go:117] "RemoveContainer" containerID="b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c" Jan 29 11:29:13 crc kubenswrapper[4593]: I0129 11:29:13.832521 4593 scope.go:117] "RemoveContainer" containerID="6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620" Jan 29 11:29:13 crc kubenswrapper[4593]: I0129 11:29:13.946614 4593 scope.go:117] "RemoveContainer" containerID="690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398" Jan 29 11:29:14 crc kubenswrapper[4593]: I0129 11:29:14.023779 4593 scope.go:117] "RemoveContainer" containerID="4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c" Jan 29 11:29:14 crc kubenswrapper[4593]: I0129 11:29:14.062201 4593 scope.go:117] "RemoveContainer" containerID="9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade" Jan 29 11:29:14 crc kubenswrapper[4593]: I0129 11:29:14.105942 4593 scope.go:117] "RemoveContainer" containerID="97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6" Jan 29 11:29:16 crc kubenswrapper[4593]: I0129 11:29:16.077123 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:16 crc kubenswrapper[4593]: E0129 11:29:16.077726 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:31 crc kubenswrapper[4593]: I0129 11:29:31.076023 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:31 crc kubenswrapper[4593]: E0129 11:29:31.076921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:34 crc kubenswrapper[4593]: I0129 11:29:34.057672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerDied","Data":"5c199554479c727e40d38e1c73ab1886c6ddf721c6751444cd8da17a69216ec5"} Jan 29 11:29:34 crc kubenswrapper[4593]: I0129 11:29:34.057616 4593 generic.go:334] "Generic (PLEG): container finished" podID="fee0ef55-8edb-456c-9344-98a3b34d3aab" 
containerID="5c199554479c727e40d38e1c73ab1886c6ddf721c6751444cd8da17a69216ec5" exitCode=0 Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.488249 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.626908 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"fee0ef55-8edb-456c-9344-98a3b34d3aab\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.627433 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"fee0ef55-8edb-456c-9344-98a3b34d3aab\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.627549 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"fee0ef55-8edb-456c-9344-98a3b34d3aab\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.638925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk" (OuterVolumeSpecName: "kube-api-access-4lpsk") pod "fee0ef55-8edb-456c-9344-98a3b34d3aab" (UID: "fee0ef55-8edb-456c-9344-98a3b34d3aab"). InnerVolumeSpecName "kube-api-access-4lpsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.658490 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory" (OuterVolumeSpecName: "inventory") pod "fee0ef55-8edb-456c-9344-98a3b34d3aab" (UID: "fee0ef55-8edb-456c-9344-98a3b34d3aab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.666026 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fee0ef55-8edb-456c-9344-98a3b34d3aab" (UID: "fee0ef55-8edb-456c-9344-98a3b34d3aab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.733433 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.733506 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.733525 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.078870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerDied","Data":"b0ae0b25831e041bfe96f6c4a3d79e01d947c880509926da1feb03c9559ebd7a"} Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.078941 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ae0b25831e041bfe96f6c4a3d79e01d947c880509926da1feb03c9559ebd7a" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.078983 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.186072 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg"] Jan 29 11:29:36 crc kubenswrapper[4593]: E0129 11:29:36.187293 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.187448 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.187872 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.188978 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.191686 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.202813 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.203172 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.205393 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg"] Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.205777 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.350858 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.351457 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.351657 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.453329 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.453698 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.453874 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.459580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.460300 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.469927 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.506449 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:37 crc kubenswrapper[4593]: I0129 11:29:37.054258 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg"] Jan 29 11:29:37 crc kubenswrapper[4593]: I0129 11:29:37.059079 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:29:37 crc kubenswrapper[4593]: I0129 11:29:37.094913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerStarted","Data":"e32031e06aad254861bb54923223ee1752de351cad7516014ab280e7d0197bdf"} Jan 29 11:29:38 crc kubenswrapper[4593]: I0129 11:29:38.109589 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerStarted","Data":"58d92a6cf90bfa5b104f1ad9533044c99bc8076e9572dec59724d020f65d5b0d"} Jan 29 11:29:38 crc kubenswrapper[4593]: I0129 11:29:38.128662 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" podStartSLOduration=1.494348211 podStartE2EDuration="2.128598825s" podCreationTimestamp="2026-01-29 11:29:36 +0000 UTC" firstStartedPulling="2026-01-29 11:29:37.058736602 +0000 UTC m=+1842.931770793" lastFinishedPulling="2026-01-29 11:29:37.692987216 +0000 UTC m=+1843.566021407" observedRunningTime="2026-01-29 11:29:38.124860045 +0000 UTC m=+1843.997894246" watchObservedRunningTime="2026-01-29 11:29:38.128598825 +0000 UTC 
m=+1844.001633016" Jan 29 11:29:46 crc kubenswrapper[4593]: I0129 11:29:46.074380 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:46 crc kubenswrapper[4593]: E0129 11:29:46.075083 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:48 crc kubenswrapper[4593]: I0129 11:29:48.596066 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" podUID="960bb326-dc22-43e5-bc4f-05c9ce9e26a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:29:57 crc kubenswrapper[4593]: I0129 11:29:57.075866 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:57 crc kubenswrapper[4593]: E0129 11:29:57.076730 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:58 crc kubenswrapper[4593]: I0129 11:29:58.055781 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:29:58 crc kubenswrapper[4593]: I0129 11:29:58.069118 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:29:59 crc kubenswrapper[4593]: I0129 11:29:59.085778 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" path="/var/lib/kubelet/pods/9a120fd3-e300-459e-9c9b-dd0f3da25621/volumes" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.156932 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.160786 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.165017 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.168166 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.172671 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.243427 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.243604 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.243681 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.346054 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.346432 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.346729 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.350269 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod 
\"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.363963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.368074 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.488045 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.933735 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 11:30:01 crc kubenswrapper[4593]: I0129 11:30:01.023934 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" event={"ID":"fe3bb310-71b1-4d29-a302-e06181c04f5f","Type":"ContainerStarted","Data":"c4c6458cd97ffb2aeecd77496fd68f83d6c2c4298bddc9c470b708adf9f616a5"} Jan 29 11:30:02 crc kubenswrapper[4593]: I0129 11:30:02.050460 4593 generic.go:334] "Generic (PLEG): container finished" podID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerID="f5dc8ed87db86aba663f3bdc857a868a9a85bafb38e9e0269844cbb77f36242a" exitCode=0 Jan 29 11:30:02 crc kubenswrapper[4593]: I0129 11:30:02.050567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" event={"ID":"fe3bb310-71b1-4d29-a302-e06181c04f5f","Type":"ContainerDied","Data":"f5dc8ed87db86aba663f3bdc857a868a9a85bafb38e9e0269844cbb77f36242a"} Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.333505 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.410658 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"fe3bb310-71b1-4d29-a302-e06181c04f5f\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.410872 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"fe3bb310-71b1-4d29-a302-e06181c04f5f\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.410901 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod \"fe3bb310-71b1-4d29-a302-e06181c04f5f\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.412202 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe3bb310-71b1-4d29-a302-e06181c04f5f" (UID: "fe3bb310-71b1-4d29-a302-e06181c04f5f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.415904 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe3bb310-71b1-4d29-a302-e06181c04f5f" (UID: "fe3bb310-71b1-4d29-a302-e06181c04f5f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.418404 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h" (OuterVolumeSpecName: "kube-api-access-v479h") pod "fe3bb310-71b1-4d29-a302-e06181c04f5f" (UID: "fe3bb310-71b1-4d29-a302-e06181c04f5f"). InnerVolumeSpecName "kube-api-access-v479h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.512714 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.513001 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.513068 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:04 crc kubenswrapper[4593]: I0129 11:30:04.070451 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" event={"ID":"fe3bb310-71b1-4d29-a302-e06181c04f5f","Type":"ContainerDied","Data":"c4c6458cd97ffb2aeecd77496fd68f83d6c2c4298bddc9c470b708adf9f616a5"} Jan 29 11:30:04 crc kubenswrapper[4593]: I0129 11:30:04.070498 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4c6458cd97ffb2aeecd77496fd68f83d6c2c4298bddc9c470b708adf9f616a5" Jan 29 11:30:04 crc kubenswrapper[4593]: I0129 11:30:04.070521 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:12 crc kubenswrapper[4593]: I0129 11:30:12.074847 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:12 crc kubenswrapper[4593]: E0129 11:30:12.075618 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:14 crc kubenswrapper[4593]: I0129 11:30:14.344045 4593 scope.go:117] "RemoveContainer" containerID="81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b" Jan 29 11:30:25 crc kubenswrapper[4593]: I0129 11:30:25.081176 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:25 crc kubenswrapper[4593]: E0129 11:30:25.081930 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:30 crc kubenswrapper[4593]: I0129 11:30:30.053562 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:30:30 crc kubenswrapper[4593]: I0129 11:30:30.065852 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:30:31 crc kubenswrapper[4593]: I0129 
11:30:31.086724 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" path="/var/lib/kubelet/pods/ecc4cd76-a47d-4691-906f-d1617455f100/volumes" Jan 29 11:30:40 crc kubenswrapper[4593]: I0129 11:30:40.074869 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:40 crc kubenswrapper[4593]: E0129 11:30:40.075877 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:42 crc kubenswrapper[4593]: I0129 11:30:42.035882 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:30:42 crc kubenswrapper[4593]: I0129 11:30:42.068180 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.008599 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:30:43 crc kubenswrapper[4593]: E0129 11:30:43.009487 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerName="collect-profiles" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.009512 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerName="collect-profiles" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.009770 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerName="collect-profiles" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.011529 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.019381 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.083070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.083146 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.083274 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.085603 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" path="/var/lib/kubelet/pods/c4d30b0b-741b-4275-bcd3-65f27a294d54/volumes" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184392 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184521 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184574 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184901 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.185275 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"redhat-operators-82d5x\" 
(UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.217702 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.336001 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.135704 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.386812 4593 generic.go:334] "Generic (PLEG): container finished" podID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db" exitCode=0 Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.387024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"} Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.387113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerStarted","Data":"a00c471bfe7ad5fb5e04c038f64e41f5f6ca0e1837c2dd3dfeed096385c3abac"} Jan 29 11:30:46 crc kubenswrapper[4593]: I0129 11:30:46.404903 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerStarted","Data":"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"} Jan 29 11:30:52 crc kubenswrapper[4593]: I0129 11:30:52.455367 4593 generic.go:334] "Generic (PLEG): container finished" podID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerID="58d92a6cf90bfa5b104f1ad9533044c99bc8076e9572dec59724d020f65d5b0d" exitCode=0 Jan 29 11:30:52 crc kubenswrapper[4593]: I0129 11:30:52.455454 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerDied","Data":"58d92a6cf90bfa5b104f1ad9533044c99bc8076e9572dec59724d020f65d5b0d"} Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.075963 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:54 crc kubenswrapper[4593]: E0129 11:30:54.077712 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.123009 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.326743 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"80d7dd41-691a-4411-97c2-91245d43b8ea\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.327156 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"80d7dd41-691a-4411-97c2-91245d43b8ea\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.327454 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"80d7dd41-691a-4411-97c2-91245d43b8ea\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.338026 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9" (OuterVolumeSpecName: "kube-api-access-9sbm9") pod "80d7dd41-691a-4411-97c2-91245d43b8ea" (UID: "80d7dd41-691a-4411-97c2-91245d43b8ea"). InnerVolumeSpecName "kube-api-access-9sbm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.360702 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "80d7dd41-691a-4411-97c2-91245d43b8ea" (UID: "80d7dd41-691a-4411-97c2-91245d43b8ea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.377079 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory" (OuterVolumeSpecName: "inventory") pod "80d7dd41-691a-4411-97c2-91245d43b8ea" (UID: "80d7dd41-691a-4411-97c2-91245d43b8ea"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.430452 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.430509 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.430529 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.478878 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerDied","Data":"e32031e06aad254861bb54923223ee1752de351cad7516014ab280e7d0197bdf"} Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.478922 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32031e06aad254861bb54923223ee1752de351cad7516014ab280e7d0197bdf" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.478920 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.590194 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p"] Jan 29 11:30:54 crc kubenswrapper[4593]: E0129 11:30:54.590589 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.590605 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.590850 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.592485 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.597423 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.598086 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.598086 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.601675 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.615762 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p"] Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.735452 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.735619 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.735791 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.837436 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.837554 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.837736 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.844363 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.851768 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.855833 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.912260 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:55 crc kubenswrapper[4593]: I0129 11:30:55.552515 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p"] Jan 29 11:30:56 crc kubenswrapper[4593]: I0129 11:30:56.500619 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerStarted","Data":"377ce67068eb512799c63a093c00caf7f33bcd4e9f3a083a6f4884d34e4e543d"} Jan 29 11:30:56 crc kubenswrapper[4593]: I0129 11:30:56.503733 4593 generic.go:334] "Generic (PLEG): container finished" podID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759" exitCode=0 Jan 29 11:30:56 crc kubenswrapper[4593]: I0129 11:30:56.503789 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"} Jan 29 11:30:57 crc kubenswrapper[4593]: I0129 11:30:57.449893 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.532884 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerStarted","Data":"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"} Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.536738 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerStarted","Data":"48cd5db24f135f274647760a88e09cee1d55032bbbad248fe310a7bb592d3aca"} Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.561807 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82d5x" podStartSLOduration=3.390233307 podStartE2EDuration="16.561789227s" podCreationTimestamp="2026-01-29 11:30:42 +0000 UTC" firstStartedPulling="2026-01-29 11:30:44.388864772 +0000 UTC m=+1910.261898963" lastFinishedPulling="2026-01-29 11:30:57.560420692 +0000 UTC m=+1923.433454883" observedRunningTime="2026-01-29 11:30:58.55822113 +0000 UTC m=+1924.431255321" watchObservedRunningTime="2026-01-29 11:30:58.561789227 +0000 UTC m=+1924.434823418" Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.591882 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" podStartSLOduration=2.707972421 podStartE2EDuration="4.591853992s" podCreationTimestamp="2026-01-29 11:30:54 +0000 UTC" firstStartedPulling="2026-01-29 11:30:55.561912473 +0000 UTC m=+1921.434946664" lastFinishedPulling="2026-01-29 11:30:57.445794044 +0000 UTC m=+1923.318828235" observedRunningTime="2026-01-29 11:30:58.581479541 +0000 UTC m=+1924.454513732" watchObservedRunningTime="2026-01-29 11:30:58.591853992 +0000 UTC m=+1924.464888183" Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.336154 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.337392 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.591348 4593 generic.go:334] "Generic (PLEG): container finished" podID="0f5fb9be-3781-4b9a-96d8-705593907345" containerID="48cd5db24f135f274647760a88e09cee1d55032bbbad248fe310a7bb592d3aca" exitCode=0 Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.591425 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerDied","Data":"48cd5db24f135f274647760a88e09cee1d55032bbbad248fe310a7bb592d3aca"} Jan 29 11:31:04 crc kubenswrapper[4593]: I0129 11:31:04.389883 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:04 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:04 crc kubenswrapper[4593]: > Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.023741 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.081507 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:05 crc kubenswrapper[4593]: E0129 11:31:05.081835 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.160098 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"0f5fb9be-3781-4b9a-96d8-705593907345\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.160377 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"0f5fb9be-3781-4b9a-96d8-705593907345\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.160434 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"0f5fb9be-3781-4b9a-96d8-705593907345\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.169939 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq" (OuterVolumeSpecName: "kube-api-access-2kfqq") pod "0f5fb9be-3781-4b9a-96d8-705593907345" (UID: "0f5fb9be-3781-4b9a-96d8-705593907345"). InnerVolumeSpecName "kube-api-access-2kfqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.189485 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory" (OuterVolumeSpecName: "inventory") pod "0f5fb9be-3781-4b9a-96d8-705593907345" (UID: "0f5fb9be-3781-4b9a-96d8-705593907345"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.202259 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f5fb9be-3781-4b9a-96d8-705593907345" (UID: "0f5fb9be-3781-4b9a-96d8-705593907345"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.263532 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.263570 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.263586 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.611025 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerDied","Data":"377ce67068eb512799c63a093c00caf7f33bcd4e9f3a083a6f4884d34e4e543d"} Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.611071 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.611087 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377ce67068eb512799c63a093c00caf7f33bcd4e9f3a083a6f4884d34e4e543d" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.737069 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"] Jan 29 11:31:05 crc kubenswrapper[4593]: E0129 11:31:05.737510 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5fb9be-3781-4b9a-96d8-705593907345" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.737533 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5fb9be-3781-4b9a-96d8-705593907345" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.737915 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5fb9be-3781-4b9a-96d8-705593907345" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.738599 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.741503 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.743228 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.743394 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.743803 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.745446 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"] Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.873860 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.874186 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.874229 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.975703 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.975751 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.976678 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.980314 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.981460 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:06 crc kubenswrapper[4593]: I0129 11:31:06.001936 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:06 crc kubenswrapper[4593]: I0129 11:31:06.078656 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:06 crc kubenswrapper[4593]: I0129 11:31:06.649228 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"] Jan 29 11:31:06 crc kubenswrapper[4593]: W0129 11:31:06.657915 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62d982c9_eb7a_4d9d_9cdd_2248c63b06fb.slice/crio-8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5 WatchSource:0}: Error finding container 8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5: Status 404 returned error can't find the container with id 8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5 Jan 29 11:31:07 crc kubenswrapper[4593]: I0129 11:31:07.628719 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerStarted","Data":"8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5"} Jan 29 11:31:08 crc kubenswrapper[4593]: I0129 11:31:08.639972 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerStarted","Data":"a834152221954d7f1ac3964aed5ebfdb5eb1ef9d8e56af1cff55ac1b4ff20571"} Jan 29 11:31:08 crc kubenswrapper[4593]: I0129 11:31:08.663878 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" podStartSLOduration=2.051789053 podStartE2EDuration="3.663858584s" podCreationTimestamp="2026-01-29 11:31:05 +0000 UTC" firstStartedPulling="2026-01-29 11:31:06.660158216 +0000 UTC m=+1932.533192407" lastFinishedPulling="2026-01-29 
11:31:08.272227747 +0000 UTC m=+1934.145261938" observedRunningTime="2026-01-29 11:31:08.661557132 +0000 UTC m=+1934.534591343" watchObservedRunningTime="2026-01-29 11:31:08.663858584 +0000 UTC m=+1934.536892775" Jan 29 11:31:13 crc kubenswrapper[4593]: I0129 11:31:13.048680 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:31:13 crc kubenswrapper[4593]: I0129 11:31:13.059471 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:31:13 crc kubenswrapper[4593]: I0129 11:31:13.087268 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" path="/var/lib/kubelet/pods/39f1974c-39c2-48ab-96f4-ad9b138bdd2a/volumes" Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.397540 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:14 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:14 crc kubenswrapper[4593]: > Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.464737 4593 scope.go:117] "RemoveContainer" containerID="becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a" Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.545529 4593 scope.go:117] "RemoveContainer" containerID="96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7" Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.604427 4593 scope.go:117] "RemoveContainer" containerID="1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1" Jan 29 11:31:17 crc kubenswrapper[4593]: I0129 11:31:17.075478 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:17 crc kubenswrapper[4593]: E0129 11:31:17.076263 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:24 crc kubenswrapper[4593]: I0129 11:31:24.380591 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:24 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:24 crc kubenswrapper[4593]: > Jan 29 11:31:32 crc kubenswrapper[4593]: I0129 11:31:32.075262 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:32 crc kubenswrapper[4593]: E0129 11:31:32.076062 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:34 crc kubenswrapper[4593]: I0129 
11:31:34.397188 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:34 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:34 crc kubenswrapper[4593]: > Jan 29 11:31:43 crc kubenswrapper[4593]: I0129 11:31:43.407708 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:43 crc kubenswrapper[4593]: I0129 11:31:43.463999 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:43 crc kubenswrapper[4593]: I0129 11:31:43.646543 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:31:44 crc kubenswrapper[4593]: I0129 11:31:44.075622 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:44 crc kubenswrapper[4593]: E0129 11:31:44.075909 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:44 crc kubenswrapper[4593]: I0129 11:31:44.948831 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" containerID="cri-o://e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" gracePeriod=2 Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.412018 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.535285 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.535534 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.535718 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.537284 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities" (OuterVolumeSpecName: "utilities") pod "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" (UID: "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.544876 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4" (OuterVolumeSpecName: "kube-api-access-d68k4") pod "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" (UID: "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4"). InnerVolumeSpecName "kube-api-access-d68k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.639667 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.639943 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.673256 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" (UID: "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.742064 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.959902 4593 generic.go:334] "Generic (PLEG): container finished" podID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" exitCode=0 Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.959962 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"} Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.960000 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"a00c471bfe7ad5fb5e04c038f64e41f5f6ca0e1837c2dd3dfeed096385c3abac"} Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.960030 4593 scope.go:117] "RemoveContainer" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.960031 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.000782 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.002097 4593 scope.go:117] "RemoveContainer" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.010938 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.039111 4593 scope.go:117] "RemoveContainer" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.073105 4593 scope.go:117] "RemoveContainer" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" Jan 29 11:31:46 crc kubenswrapper[4593]: E0129 11:31:46.073556 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f\": container with ID starting with e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f not found: ID does not exist" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.073678 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"} err="failed to get container status \"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f\": rpc error: code = NotFound desc = could not find container \"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f\": container with ID starting with e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f not found: ID does not exist" Jan 29 11:31:46 crc 
kubenswrapper[4593]: I0129 11:31:46.073779 4593 scope.go:117] "RemoveContainer" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759" Jan 29 11:31:46 crc kubenswrapper[4593]: E0129 11:31:46.075235 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759\": container with ID starting with c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759 not found: ID does not exist" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.075270 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"} err="failed to get container status \"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759\": rpc error: code = NotFound desc = could not find container \"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759\": container with ID starting with c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759 not found: ID does not exist" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.075292 4593 scope.go:117] "RemoveContainer" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db" Jan 29 11:31:46 crc kubenswrapper[4593]: E0129 11:31:46.075866 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db\": container with ID starting with 5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db not found: ID does not exist" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db" Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.075893 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"} err="failed to get container status \"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db\": rpc error: code = NotFound desc = could not find container \"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db\": container with ID starting with 5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db not found: ID does not exist" Jan 29 11:31:47 crc kubenswrapper[4593]: I0129 11:31:47.088450 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" path="/var/lib/kubelet/pods/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4/volumes" Jan 29 11:31:50 crc kubenswrapper[4593]: I0129 11:31:50.002261 4593 generic.go:334] "Generic (PLEG): container finished" podID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerID="a834152221954d7f1ac3964aed5ebfdb5eb1ef9d8e56af1cff55ac1b4ff20571" exitCode=0 Jan 29 11:31:50 crc kubenswrapper[4593]: I0129 11:31:50.002342 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerDied","Data":"a834152221954d7f1ac3964aed5ebfdb5eb1ef9d8e56af1cff55ac1b4ff20571"} Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.433420 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.584201 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.584621 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.585441 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.607960 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k" (OuterVolumeSpecName: "kube-api-access-7sb7k") pod "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" (UID: "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb"). InnerVolumeSpecName "kube-api-access-7sb7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.620116 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" (UID: "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.621538 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory" (OuterVolumeSpecName: "inventory") pod "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" (UID: "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.687452 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.687496 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.687507 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.024850 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerDied","Data":"8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5"} Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.024884 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.025297 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234291 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"] Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234818 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-utilities" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234838 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-utilities" Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234850 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234857 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234885 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234893 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234902 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-content" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234908 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-content" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.235104 
4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.235124 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.235773 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.239088 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.239424 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.239585 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.242317 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.260129 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"] Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.403148 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.403241 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.403440 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.505680 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.505763 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.505873 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.525115 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.529413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.542737 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.560460 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:31:53 crc kubenswrapper[4593]: I0129 11:31:53.150303 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"] Jan 29 11:31:54 crc kubenswrapper[4593]: I0129 11:31:54.050123 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerStarted","Data":"f6653ebeeff453ab657fe873f5506c2d5b9c531126438ca29b0e219b1ac1b699"} Jan 29 11:31:55 crc kubenswrapper[4593]: I0129 11:31:55.062616 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerStarted","Data":"00574ec0eb21e974d0ee0f68191e26342a0c84daa7fa9850d309f82ed1b27a97"} Jan 29 11:31:55 crc kubenswrapper[4593]: I0129 11:31:55.105810 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" podStartSLOduration=2.024047303 podStartE2EDuration="3.105788767s" podCreationTimestamp="2026-01-29 11:31:52 +0000 UTC" firstStartedPulling="2026-01-29 11:31:53.163087484 +0000 UTC m=+1979.036121675" lastFinishedPulling="2026-01-29 11:31:54.244828928 +0000 UTC m=+1980.117863139" observedRunningTime="2026-01-29 11:31:55.095954501 +0000 UTC m=+1980.968988692" watchObservedRunningTime="2026-01-29 11:31:55.105788767 +0000 UTC m=+1980.978822948" Jan 29 11:31:57 crc kubenswrapper[4593]: I0129 11:31:57.074908 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:57 crc kubenswrapper[4593]: E0129 11:31:57.075553 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:32:11 crc kubenswrapper[4593]: I0129 11:32:11.075869 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:32:12 crc kubenswrapper[4593]: I0129 11:32:12.225227 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972"} Jan 29 11:32:46 crc kubenswrapper[4593]: I0129 11:32:46.515936 4593 generic.go:334] "Generic (PLEG): container finished" podID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerID="00574ec0eb21e974d0ee0f68191e26342a0c84daa7fa9850d309f82ed1b27a97" exitCode=0 Jan 29 11:32:46 crc kubenswrapper[4593]: I0129 11:32:46.516014 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerDied","Data":"00574ec0eb21e974d0ee0f68191e26342a0c84daa7fa9850d309f82ed1b27a97"} Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.682482 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.831534 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"83fa3cd4-ce6a-44bb-b652-c783504941f9\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.831867 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"83fa3cd4-ce6a-44bb-b652-c783504941f9\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.831988 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"83fa3cd4-ce6a-44bb-b652-c783504941f9\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.840556 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x" (OuterVolumeSpecName: "kube-api-access-cmf7x") pod "83fa3cd4-ce6a-44bb-b652-c783504941f9" (UID: "83fa3cd4-ce6a-44bb-b652-c783504941f9"). InnerVolumeSpecName "kube-api-access-cmf7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.869819 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory" (OuterVolumeSpecName: "inventory") pod "83fa3cd4-ce6a-44bb-b652-c783504941f9" (UID: "83fa3cd4-ce6a-44bb-b652-c783504941f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.875888 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83fa3cd4-ce6a-44bb-b652-c783504941f9" (UID: "83fa3cd4-ce6a-44bb-b652-c783504941f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.934743 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") on node \"crc\" DevicePath \"\"" Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.934790 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.934804 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.337153 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cfk97"] Jan 29 11:32:49 crc kubenswrapper[4593]: E0129 11:32:49.339465 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.339608 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.339973 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.340726 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.352329 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cfk97"] Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.443539 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.443689 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.443762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.544918 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.545002 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.545104 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.551430 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.552319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: 
\"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.554584 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerDied","Data":"f6653ebeeff453ab657fe873f5506c2d5b9c531126438ca29b0e219b1ac1b699"} Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.554617 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6653ebeeff453ab657fe873f5506c2d5b9c531126438ca29b0e219b1ac1b699" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.554684 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.564322 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.666235 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:50 crc kubenswrapper[4593]: I0129 11:32:50.206163 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cfk97"] Jan 29 11:32:50 crc kubenswrapper[4593]: I0129 11:32:50.569605 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerStarted","Data":"075b8459fc88b5c9f61f00148c508a0e3bb632f0c9eb6956820e3ab0c4348252"} Jan 29 11:32:51 crc kubenswrapper[4593]: I0129 11:32:51.638764 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerStarted","Data":"0cdfedabb2cb51565fe633b2201e57d5c189e9bb0541113dc3ec3fce82165e56"} Jan 29 11:32:51 crc kubenswrapper[4593]: I0129 11:32:51.672756 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" podStartSLOduration=1.93452673 podStartE2EDuration="2.672729342s" podCreationTimestamp="2026-01-29 11:32:49 +0000 UTC" firstStartedPulling="2026-01-29 11:32:50.206712099 +0000 UTC m=+2036.079746320" lastFinishedPulling="2026-01-29 11:32:50.944914741 +0000 UTC m=+2036.817948932" observedRunningTime="2026-01-29 11:32:51.660411117 +0000 UTC m=+2037.533445318" watchObservedRunningTime="2026-01-29 11:32:51.672729342 +0000 UTC m=+2037.545763543" Jan 29 11:32:57 crc kubenswrapper[4593]: I0129 11:32:57.696693 4593 generic.go:334] "Generic (PLEG): container finished" podID="c22e1d76-6585-46e2-9c31-5c002e021882" containerID="0cdfedabb2cb51565fe633b2201e57d5c189e9bb0541113dc3ec3fce82165e56" exitCode=0 Jan 29 11:32:57 crc kubenswrapper[4593]: I0129 11:32:57.696919 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerDied","Data":"0cdfedabb2cb51565fe633b2201e57d5c189e9bb0541113dc3ec3fce82165e56"} Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 
11:32:59.273828 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.427173 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"c22e1d76-6585-46e2-9c31-5c002e021882\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.427437 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"c22e1d76-6585-46e2-9c31-5c002e021882\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.427624 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"c22e1d76-6585-46e2-9c31-5c002e021882\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.447026 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl" (OuterVolumeSpecName: "kube-api-access-jrqtl") pod "c22e1d76-6585-46e2-9c31-5c002e021882" (UID: "c22e1d76-6585-46e2-9c31-5c002e021882"). InnerVolumeSpecName "kube-api-access-jrqtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.458243 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c22e1d76-6585-46e2-9c31-5c002e021882" (UID: "c22e1d76-6585-46e2-9c31-5c002e021882"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.462257 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c22e1d76-6585-46e2-9c31-5c002e021882" (UID: "c22e1d76-6585-46e2-9c31-5c002e021882"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.529741 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.529783 4593 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.529796 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") on node \"crc\" DevicePath \"\"" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.722415 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerDied","Data":"075b8459fc88b5c9f61f00148c508a0e3bb632f0c9eb6956820e3ab0c4348252"} Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.722488 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075b8459fc88b5c9f61f00148c508a0e3bb632f0c9eb6956820e3ab0c4348252" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.722572 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.818483 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"] Jan 29 11:32:59 crc kubenswrapper[4593]: E0129 11:32:59.819385 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e1d76-6585-46e2-9c31-5c002e021882" containerName="ssh-known-hosts-edpm-deployment" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.819417 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e1d76-6585-46e2-9c31-5c002e021882" containerName="ssh-known-hosts-edpm-deployment" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.819782 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22e1d76-6585-46e2-9c31-5c002e021882" containerName="ssh-known-hosts-edpm-deployment" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.821027 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.826546 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.827308 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.829979 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.830900 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.848356 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"] Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.945248 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.945351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.945454 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.047049 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.047150 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.047204 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.051345 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.053481 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.070036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.144908 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.747172 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"] Jan 29 11:33:01 crc kubenswrapper[4593]: I0129 11:33:01.748024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerStarted","Data":"12b72897b5f5d11caf6ec17f7553c3a6ceba03b6a70dd8696ec59dda1c8487cb"} Jan 29 11:33:01 crc kubenswrapper[4593]: I0129 11:33:01.748578 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerStarted","Data":"946f49a462d783d56d9cb7915ab170aea3fa4354acdbbab852861c916716c3a4"} Jan 29 11:33:01 crc kubenswrapper[4593]: I0129 11:33:01.793437 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" podStartSLOduration=2.205929375 podStartE2EDuration="2.793390831s" podCreationTimestamp="2026-01-29 11:32:59 +0000 UTC" firstStartedPulling="2026-01-29 11:33:00.740319093 +0000 UTC m=+2046.613353294" lastFinishedPulling="2026-01-29 11:33:01.327780559 +0000 UTC m=+2047.200814750" observedRunningTime="2026-01-29 11:33:01.784190442 +0000 UTC m=+2047.657224633" watchObservedRunningTime="2026-01-29 11:33:01.793390831 +0000 UTC m=+2047.666425032" Jan 29 11:33:09 crc kubenswrapper[4593]: I0129 11:33:09.833433 4593 generic.go:334] "Generic (PLEG): container finished" podID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerID="12b72897b5f5d11caf6ec17f7553c3a6ceba03b6a70dd8696ec59dda1c8487cb" exitCode=0 Jan 29 11:33:09 crc kubenswrapper[4593]: I0129 11:33:09.833513 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerDied","Data":"12b72897b5f5d11caf6ec17f7553c3a6ceba03b6a70dd8696ec59dda1c8487cb"} Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.235433 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.326494 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.327583 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.327913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.338239 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl" (OuterVolumeSpecName: "kube-api-access-jnhhl") pod "b1f286ec-6f85-44c4-94f5-f66bc21c2a64" (UID: "b1f286ec-6f85-44c4-94f5-f66bc21c2a64"). InnerVolumeSpecName "kube-api-access-jnhhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.362107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1f286ec-6f85-44c4-94f5-f66bc21c2a64" (UID: "b1f286ec-6f85-44c4-94f5-f66bc21c2a64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.367355 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory" (OuterVolumeSpecName: "inventory") pod "b1f286ec-6f85-44c4-94f5-f66bc21c2a64" (UID: "b1f286ec-6f85-44c4-94f5-f66bc21c2a64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.431115 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.431334 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.431440 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") on node \"crc\" DevicePath \"\"" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.850310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerDied","Data":"946f49a462d783d56d9cb7915ab170aea3fa4354acdbbab852861c916716c3a4"} Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.850376 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946f49a462d783d56d9cb7915ab170aea3fa4354acdbbab852861c916716c3a4" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.850681 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.940376 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"] Jan 29 11:33:11 crc kubenswrapper[4593]: E0129 11:33:11.941288 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.941392 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.941749 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.942702 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.945897 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.946035 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.946610 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.947769 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.952329 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"] Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.042099 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.042444 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.042724 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.144522 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.144618 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.144657 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.148866 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.153460 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.164959 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.264355 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.907761 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"] Jan 29 11:33:13 crc kubenswrapper[4593]: I0129 11:33:13.876350 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerStarted","Data":"c4c21af487b9c0edc57b286f105bf2a456629dead664ba5178ff2d6c7a314a0c"} Jan 29 11:33:13 crc kubenswrapper[4593]: I0129 11:33:13.876722 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerStarted","Data":"d5a36adf4791937de8999978c5b33642cd27043f6bf0df4cfd53332f0acfd5ea"} Jan 29 11:33:22 crc kubenswrapper[4593]: I0129 11:33:22.968618 4593 generic.go:334] "Generic (PLEG): container finished" podID="9a263e61-6654-4030-bd96-c1baa9314111" containerID="c4c21af487b9c0edc57b286f105bf2a456629dead664ba5178ff2d6c7a314a0c" exitCode=0 Jan 29 11:33:22 crc kubenswrapper[4593]: I0129 11:33:22.968819 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerDied","Data":"c4c21af487b9c0edc57b286f105bf2a456629dead664ba5178ff2d6c7a314a0c"} Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.532126 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.591159 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"9a263e61-6654-4030-bd96-c1baa9314111\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.591681 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"9a263e61-6654-4030-bd96-c1baa9314111\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.591846 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"9a263e61-6654-4030-bd96-c1baa9314111\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.598701 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d" (OuterVolumeSpecName: "kube-api-access-2dp7d") pod "9a263e61-6654-4030-bd96-c1baa9314111" (UID: "9a263e61-6654-4030-bd96-c1baa9314111"). InnerVolumeSpecName "kube-api-access-2dp7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.625194 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a263e61-6654-4030-bd96-c1baa9314111" (UID: "9a263e61-6654-4030-bd96-c1baa9314111"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.629542 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory" (OuterVolumeSpecName: "inventory") pod "9a263e61-6654-4030-bd96-c1baa9314111" (UID: "9a263e61-6654-4030-bd96-c1baa9314111"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.693955 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.694158 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") on node \"crc\" DevicePath \"\"" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.694216 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.991130 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerDied","Data":"d5a36adf4791937de8999978c5b33642cd27043f6bf0df4cfd53332f0acfd5ea"} Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.991444 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5a36adf4791937de8999978c5b33642cd27043f6bf0df4cfd53332f0acfd5ea" Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.991180 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.135511 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68"] Jan 29 11:33:25 crc kubenswrapper[4593]: E0129 11:33:25.136031 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a263e61-6654-4030-bd96-c1baa9314111" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.136058 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a263e61-6654-4030-bd96-c1baa9314111" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.136310 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a263e61-6654-4030-bd96-c1baa9314111" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.138705 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.143353 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.143823 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144468 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144587 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144511 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144687 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144723 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.146008 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.155553 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68"] Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308116 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308181 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308237 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308275 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308465 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308534 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308719 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308855 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309054 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: 
\"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309082 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309133 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410629 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410754 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410788 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410849 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410899 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411783 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411833 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411863 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411907 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411989 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.412035 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.415015 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.415488 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.416315 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.416343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.417137 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc 
kubenswrapper[4593]: I0129 11:33:25.419421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.420489 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.421405 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.421456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.422929 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.431453 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.431708 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.433868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.437112 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.506136 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:26 crc kubenswrapper[4593]: I0129 11:33:26.023518 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68"] Jan 29 11:33:27 crc kubenswrapper[4593]: I0129 11:33:27.012871 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerStarted","Data":"9596acadaeeeff307f766346fb427baede4f5c2973b3737c1943c3387e09ddb5"} Jan 29 11:33:27 crc kubenswrapper[4593]: I0129 11:33:27.013420 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerStarted","Data":"ec5d29b14d53bd5f62869f75adcc252c43d91c395941b786e46c53db56831c57"} Jan 29 11:33:27 crc kubenswrapper[4593]: I0129 11:33:27.033743 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" podStartSLOduration=1.6023032050000001 podStartE2EDuration="2.03370377s" podCreationTimestamp="2026-01-29 11:33:25 +0000 UTC" firstStartedPulling="2026-01-29 11:33:26.033178597 +0000 UTC m=+2071.906212788" lastFinishedPulling="2026-01-29 11:33:26.464579162 +0000 UTC m=+2072.337613353" observedRunningTime="2026-01-29 11:33:27.029256079 +0000 UTC m=+2072.902290290" watchObservedRunningTime="2026-01-29 11:33:27.03370377 +0000 UTC m=+2072.906737981" Jan 29 11:34:03 crc kubenswrapper[4593]: I0129 11:34:03.406868 4593 generic.go:334] "Generic (PLEG): container finished" podID="0418390b-7622-490c-ad95-ec5eac075440" containerID="9596acadaeeeff307f766346fb427baede4f5c2973b3737c1943c3387e09ddb5" exitCode=0 Jan 29 11:34:03 crc kubenswrapper[4593]: I0129 11:34:03.407082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerDied","Data":"9596acadaeeeff307f766346fb427baede4f5c2973b3737c1943c3387e09ddb5"} Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.860088 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906663 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906711 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906739 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906804 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906837 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906883 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906919 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") 
pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906957 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906975 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906992 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.907043 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.907063 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.916993 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917137 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk" (OuterVolumeSpecName: "kube-api-access-q89hk") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "kube-api-access-q89hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917190 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917231 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917458 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917948 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.918963 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.920785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.925273 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.927340 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.928771 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.938105 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.952326 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.961204 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory" (OuterVolumeSpecName: "inventory") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010123 4593 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010356 4593 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010505 4593 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010605 4593 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010723 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010822 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010932 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011028 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011208 4593 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011316 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011416 4593 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011567 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011701 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011819 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.432396 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerDied","Data":"ec5d29b14d53bd5f62869f75adcc252c43d91c395941b786e46c53db56831c57"} Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.432694 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5d29b14d53bd5f62869f75adcc252c43d91c395941b786e46c53db56831c57" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.432469 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.559687 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl"] Jan 29 11:34:05 crc kubenswrapper[4593]: E0129 11:34:05.560377 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0418390b-7622-490c-ad95-ec5eac075440" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.560503 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0418390b-7622-490c-ad95-ec5eac075440" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.560850 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0418390b-7622-490c-ad95-ec5eac075440" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.561656 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.564916 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.567762 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.568210 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.568444 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.568876 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.585129 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl"] Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.621916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.621988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.622013 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.622069 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.622122 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723557 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723715 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723755 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723858 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723951 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.725403 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.731057 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.732036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.732821 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.755655 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.883585 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:06 crc kubenswrapper[4593]: I0129 11:34:06.486056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl"] Jan 29 11:34:07 crc kubenswrapper[4593]: I0129 11:34:07.457556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerStarted","Data":"3d1b42f49400161b1d8c95796bd799e62ffe6e307b7fcee26199ead4efaeeb5f"} Jan 29 11:34:07 crc kubenswrapper[4593]: I0129 11:34:07.457973 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerStarted","Data":"7b36f3307cde3252ef687db46ed25297713e29f6036f5d4211d41f1c07171c14"} Jan 29 11:34:07 crc kubenswrapper[4593]: I0129 11:34:07.481410 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" podStartSLOduration=2.041449705 podStartE2EDuration="2.481372517s" podCreationTimestamp="2026-01-29 11:34:05 +0000 UTC" firstStartedPulling="2026-01-29 11:34:06.483143997 +0000 UTC m=+2112.356178188" lastFinishedPulling="2026-01-29 11:34:06.923066809 +0000 UTC m=+2112.796101000" observedRunningTime="2026-01-29 11:34:07.477738569 +0000 UTC m=+2113.350772790" watchObservedRunningTime="2026-01-29 11:34:07.481372517 +0000 UTC m=+2113.354406728" Jan 29 11:34:33 crc kubenswrapper[4593]: I0129 11:34:33.946548 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:34:33 crc kubenswrapper[4593]: I0129 11:34:33.947209 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.032944 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.035570 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.063816 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.160930 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.161171 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.161800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.263603 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264001 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264074 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264262 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264530 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.302187 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.374913 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.980272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:34:57 crc kubenswrapper[4593]: W0129 11:34:57.993643 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c4381e_f9c8_4453_8680_3ee5fab8d1f2.slice/crio-f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e WatchSource:0}: Error finding container f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e: Status 404 returned error can't find the container with id f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.010975 4593 generic.go:334] "Generic (PLEG): container finished" podID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" exitCode=0 Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.011101 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff"} Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.011361 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerStarted","Data":"f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e"} Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.013130 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:35:01 crc kubenswrapper[4593]: I0129 11:35:01.028851 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerStarted","Data":"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711"} Jan 29 11:35:03 crc kubenswrapper[4593]: I0129 11:35:03.946055 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:35:03 crc kubenswrapper[4593]: I0129 11:35:03.946652 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:04 crc kubenswrapper[4593]: I0129 11:35:04.063507 4593 generic.go:334] "Generic (PLEG): container finished" podID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" 
containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" exitCode=0 Jan 29 11:35:04 crc kubenswrapper[4593]: I0129 11:35:04.063568 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711"} Jan 29 11:35:05 crc kubenswrapper[4593]: I0129 11:35:05.086392 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerStarted","Data":"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e"} Jan 29 11:35:05 crc kubenswrapper[4593]: I0129 11:35:05.106358 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nlmk" podStartSLOduration=2.400794017 podStartE2EDuration="8.106338155s" podCreationTimestamp="2026-01-29 11:34:57 +0000 UTC" firstStartedPulling="2026-01-29 11:34:59.012861445 +0000 UTC m=+2164.885895636" lastFinishedPulling="2026-01-29 11:35:04.718405563 +0000 UTC m=+2170.591439774" observedRunningTime="2026-01-29 11:35:05.106317864 +0000 UTC m=+2170.979352055" watchObservedRunningTime="2026-01-29 11:35:05.106338155 +0000 UTC m=+2170.979372346" Jan 29 11:35:07 crc kubenswrapper[4593]: I0129 11:35:07.376410 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:07 crc kubenswrapper[4593]: I0129 11:35:07.376830 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:08 crc kubenswrapper[4593]: I0129 11:35:08.424067 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5nlmk" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" probeResult="failure" output=< Jan 29 11:35:08 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:35:08 crc kubenswrapper[4593]: > Jan 29 11:35:16 crc kubenswrapper[4593]: I0129 11:35:16.177512 4593 generic.go:334] "Generic (PLEG): container finished" podID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerID="3d1b42f49400161b1d8c95796bd799e62ffe6e307b7fcee26199ead4efaeeb5f" exitCode=0 Jan 29 11:35:16 crc kubenswrapper[4593]: I0129 11:35:16.178291 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerDied","Data":"3d1b42f49400161b1d8c95796bd799e62ffe6e307b7fcee26199ead4efaeeb5f"} Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.536965 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.668771 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.761297 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.923548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.923886 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.923908 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.924078 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.924150 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.952825 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.960857 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg" (OuterVolumeSpecName: "kube-api-access-7nkgg") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "kube-api-access-7nkgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.971338 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.971555 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory" (OuterVolumeSpecName: "inventory") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.972417 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026445 4593 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026494 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026506 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026521 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026532 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.202351 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerDied","Data":"7b36f3307cde3252ef687db46ed25297713e29f6036f5d4211d41f1c07171c14"} Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.202398 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b36f3307cde3252ef687db46ed25297713e29f6036f5d4211d41f1c07171c14" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.202719 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.411975 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct"] Jan 29 11:35:18 crc kubenswrapper[4593]: E0129 11:35:18.412455 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.412487 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.412805 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.413610 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.415331 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.415393 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.415905 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.416048 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.416829 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.417064 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.445166 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct"] Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534510 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534592 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534659 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534690 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534764 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636224 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636311 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636333 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636368 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636456 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.641254 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.641410 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.641616 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.642537 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.643144 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.661171 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.733236 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:19 crc kubenswrapper[4593]: I0129 11:35:19.296250 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct"] Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.220611 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerStarted","Data":"3d1228225b6ffd897296a865f985eb25440e60005ab6ac0ae135485a6d691258"} Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.221120 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerStarted","Data":"b6601232a02e3d92b3cca5f75209114738f2a4a3ccaef37ffa707cfb7625bc91"} Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.246900 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" podStartSLOduration=1.7392019140000001 podStartE2EDuration="2.246874561s" podCreationTimestamp="2026-01-29 11:35:18 +0000 UTC" firstStartedPulling="2026-01-29 11:35:19.302714047 +0000 UTC m=+2185.175748238" lastFinishedPulling="2026-01-29 11:35:19.810386694 +0000 UTC m=+2185.683420885" observedRunningTime="2026-01-29 11:35:20.237948199 +0000 UTC m=+2186.110982410" watchObservedRunningTime="2026-01-29 11:35:20.246874561 +0000 UTC m=+2186.119908762" Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.529298 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.529999 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nlmk" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" containerID="cri-o://a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" gracePeriod=2 Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.002462 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.094603 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.094720 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.094857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.095796 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities" (OuterVolumeSpecName: "utilities") pod "67c4381e-f9c8-4453-8680-3ee5fab8d1f2" (UID: "67c4381e-f9c8-4453-8680-3ee5fab8d1f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.117323 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z" (OuterVolumeSpecName: "kube-api-access-4b68z") pod "67c4381e-f9c8-4453-8680-3ee5fab8d1f2" (UID: "67c4381e-f9c8-4453-8680-3ee5fab8d1f2"). InnerVolumeSpecName "kube-api-access-4b68z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.152389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67c4381e-f9c8-4453-8680-3ee5fab8d1f2" (UID: "67c4381e-f9c8-4453-8680-3ee5fab8d1f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.197618 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.197664 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.197678 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.230276 4593 generic.go:334] "Generic (PLEG): container finished" podID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" exitCode=0 Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.231138 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.233773 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e"} Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.233817 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e"} Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.233842 4593 scope.go:117] "RemoveContainer" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.272231 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.272966 4593 scope.go:117] "RemoveContainer" containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.280651 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.294054 4593 scope.go:117] "RemoveContainer" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.352722 4593 scope.go:117] "RemoveContainer" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" Jan 29 11:35:21 crc kubenswrapper[4593]: E0129 11:35:21.353256 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e\": container with ID starting with a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e not found: ID does not exist" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353300 
4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e"} err="failed to get container status \"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e\": rpc error: code = NotFound desc = could not find container \"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e\": container with ID starting with a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e not found: ID does not exist" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353329 4593 scope.go:117] "RemoveContainer" containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" Jan 29 11:35:21 crc kubenswrapper[4593]: E0129 11:35:21.353788 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711\": container with ID starting with 74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711 not found: ID does not exist" containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353817 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711"} err="failed to get container status \"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711\": rpc error: code = NotFound desc = could not find container \"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711\": container with ID starting with 74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711 not found: ID does not exist" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353842 4593 scope.go:117] "RemoveContainer" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" Jan 29 11:35:21 crc kubenswrapper[4593]: E0129 11:35:21.354138 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff\": container with ID starting with 7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff not found: ID does not exist" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.354161 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff"} err="failed to get container status \"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff\": rpc error: code = NotFound desc = could not find container \"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff\": container with ID starting with 7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff not found: ID does not exist" Jan 29 11:35:23 crc kubenswrapper[4593]: I0129 11:35:23.089117 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" path="/var/lib/kubelet/pods/67c4381e-f9c8-4453-8680-3ee5fab8d1f2/volumes" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.946598 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.947162 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.947241 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.948163 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.948321 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972" gracePeriod=600 Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.357695 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972" exitCode=0 Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.357740 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972"} Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.357784 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365064 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:34 crc kubenswrapper[4593]: E0129 11:35:34.365479 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365501 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" Jan 29 11:35:34 crc kubenswrapper[4593]: E0129 11:35:34.365511 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-utilities" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365517 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-utilities" Jan 29 11:35:34 crc kubenswrapper[4593]: E0129 11:35:34.365541 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-content" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365547 4593 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-content" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365732 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.367018 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.379233 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.475549 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.475704 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.475760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577106 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577380 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577546 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577762 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577864 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.605294 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.691088 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:35 crc kubenswrapper[4593]: I0129 11:35:35.178299 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:35 crc kubenswrapper[4593]: I0129 11:35:35.367771 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerStarted","Data":"4b06ea4e929072566d99822da48350f4d7a6964940570100bca4e50927cfff13"} Jan 29 11:35:37 crc kubenswrapper[4593]: I0129 11:35:37.388548 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"} Jan 29 11:35:37 crc kubenswrapper[4593]: I0129 11:35:37.390887 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" exitCode=0 Jan 29 11:35:37 crc kubenswrapper[4593]: I0129 11:35:37.390927 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff"} Jan 29 11:35:38 crc kubenswrapper[4593]: I0129 11:35:38.404428 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerStarted","Data":"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c"} Jan 29 11:35:40 crc kubenswrapper[4593]: I0129 11:35:40.446932 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" exitCode=0 Jan 29 11:35:40 crc kubenswrapper[4593]: I0129 11:35:40.447019 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c"} Jan 29 11:35:41 crc kubenswrapper[4593]: I0129 11:35:41.459458 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerStarted","Data":"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744"} Jan 29 11:35:41 crc kubenswrapper[4593]: I0129 
11:35:41.487805 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ftts" podStartSLOduration=3.85781173 podStartE2EDuration="7.487779324s" podCreationTimestamp="2026-01-29 11:35:34 +0000 UTC" firstStartedPulling="2026-01-29 11:35:37.392200392 +0000 UTC m=+2203.265234583" lastFinishedPulling="2026-01-29 11:35:41.022167986 +0000 UTC m=+2206.895202177" observedRunningTime="2026-01-29 11:35:41.479036317 +0000 UTC m=+2207.352070518" watchObservedRunningTime="2026-01-29 11:35:41.487779324 +0000 UTC m=+2207.360813515" Jan 29 11:35:44 crc kubenswrapper[4593]: I0129 11:35:44.691590 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:44 crc kubenswrapper[4593]: I0129 11:35:44.693079 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:44 crc kubenswrapper[4593]: I0129 11:35:44.749293 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:46 crc kubenswrapper[4593]: I0129 11:35:46.558151 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:46 crc kubenswrapper[4593]: I0129 11:35:46.622912 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:48 crc kubenswrapper[4593]: I0129 11:35:48.518422 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ftts" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" containerID="cri-o://1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" gracePeriod=2 Jan 29 11:35:48 crc kubenswrapper[4593]: I0129 11:35:48.975130 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.120281 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.122123 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.123848 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities" (OuterVolumeSpecName: "utilities") pod "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" (UID: "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.124836 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.128641 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.137296 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv" (OuterVolumeSpecName: "kube-api-access-w7vdv") pod "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" (UID: "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4"). InnerVolumeSpecName "kube-api-access-w7vdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.150989 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" (UID: "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.232101 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.232139 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531585 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" exitCode=0 Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744"} Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531660 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"4b06ea4e929072566d99822da48350f4d7a6964940570100bca4e50927cfff13"} Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531679 4593 scope.go:117] "RemoveContainer" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531735 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.580303 4593 scope.go:117] "RemoveContainer" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.588590 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.607436 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.613485 4593 scope.go:117] "RemoveContainer" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.655686 4593 scope.go:117] "RemoveContainer" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" Jan 29 11:35:49 crc kubenswrapper[4593]: E0129 11:35:49.656454 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744\": container with ID starting with 1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744 not found: ID does not exist" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.656594 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744"} err="failed to get container status \"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744\": rpc error: code = NotFound desc = could not find container \"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744\": container with ID starting with 1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744 not found: ID does not exist" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.656743 4593 scope.go:117] "RemoveContainer" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" Jan 29 11:35:49 crc kubenswrapper[4593]: E0129 11:35:49.657368 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c\": container with ID starting with 99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c not found: ID does not exist" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.657512 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c"} err="failed to get container status \"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c\": rpc error: code = NotFound desc = could not find container \"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c\": container with ID starting with 99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c not found: ID does not exist" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.657622 4593 scope.go:117] "RemoveContainer" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" Jan 29 11:35:49 crc kubenswrapper[4593]: E0129 11:35:49.658090 4593 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff\": container with ID starting with aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff not found: ID does not exist" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.658136 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff"} err="failed to get container status \"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff\": rpc error: code = NotFound desc = could not find container \"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff\": container with ID starting with aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff not found: ID does not exist" Jan 29 11:35:51 crc kubenswrapper[4593]: I0129 11:35:51.085877 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" path="/var/lib/kubelet/pods/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4/volumes" Jan 29 11:36:11 crc kubenswrapper[4593]: I0129 11:36:11.759234 4593 generic.go:334] "Generic (PLEG): container finished" podID="4c7cff3f-040a-4499-825c-3cccd015326a" containerID="3d1228225b6ffd897296a865f985eb25440e60005ab6ac0ae135485a6d691258" exitCode=0 Jan 29 11:36:11 crc kubenswrapper[4593]: I0129 11:36:11.759369 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerDied","Data":"3d1228225b6ffd897296a865f985eb25440e60005ab6ac0ae135485a6d691258"} Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.227670 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.338400 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.339625 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.339918 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.340036 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.340218 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.340947 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.344338 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb" (OuterVolumeSpecName: "kube-api-access-gxpkb") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "kube-api-access-gxpkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.348815 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.369920 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.371173 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory" (OuterVolumeSpecName: "inventory") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.375222 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.378682 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443778 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443809 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443822 4593 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443834 4593 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443843 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443851 4593 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.782752 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerDied","Data":"b6601232a02e3d92b3cca5f75209114738f2a4a3ccaef37ffa707cfb7625bc91"} Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.782793 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6601232a02e3d92b3cca5f75209114738f2a4a3ccaef37ffa707cfb7625bc91" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.782849 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.915910 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j"] Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916389 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7cff3f-040a-4499-825c-3cccd015326a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916413 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7cff3f-040a-4499-825c-3cccd015326a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916427 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-utilities" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916435 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-utilities" Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916457 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-content" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916466 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-content" Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916493 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916504 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916723 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7cff3f-040a-4499-825c-3cccd015326a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916756 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.917588 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.921344 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.921726 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.922017 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.922411 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.922698 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.937214 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j"] Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.952713 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.952947 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.953174 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.953259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.953368 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055203 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055556 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055675 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055777 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055855 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.060722 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.061060 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.063045 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.064487 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.075449 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.242507 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.784876 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j"] Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.802787 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerStarted","Data":"630e5bb315500c97ee35063cb0b1025dae526568ec5b2fc147514f582e1d824e"} Jan 29 11:36:15 crc kubenswrapper[4593]: I0129 11:36:15.855591 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerStarted","Data":"d0dad791e1b4a4ce15ef06b2c8538abd555b7ecb9305ee001925866de13618a6"} Jan 29 11:36:15 crc kubenswrapper[4593]: I0129 11:36:15.888952 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" podStartSLOduration=2.407420906 podStartE2EDuration="2.888922484s" podCreationTimestamp="2026-01-29 11:36:13 +0000 UTC" firstStartedPulling="2026-01-29 11:36:14.788251527 +0000 UTC m=+2240.661285718" lastFinishedPulling="2026-01-29 11:36:15.269753105 +0000 UTC m=+2241.142787296" observedRunningTime="2026-01-29 11:36:15.879264792 +0000 UTC m=+2241.752298983" watchObservedRunningTime="2026-01-29 11:36:15.888922484 +0000 UTC m=+2241.761956675" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.117431 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.120118 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.126670 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.208191 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.208275 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.208389 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310180 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310282 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310323 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310891 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310926 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.340552 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.459880 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.048039 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.952019 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db"} Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.951831 4593 generic.go:334] "Generic (PLEG): container finished" podID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" exitCode=0 Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.952573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerStarted","Data":"91ae26aa44dcad5158d3b712c06f9da2552490c44bd51cd521f017e0fab71b0b"} Jan 29 11:37:56 crc kubenswrapper[4593]: I0129 11:37:56.976308 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerStarted","Data":"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d"} Jan 29 11:38:00 crc kubenswrapper[4593]: I0129 11:38:00.011697 4593 generic.go:334] "Generic (PLEG): container finished" podID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" exitCode=0 Jan 29 11:38:00 crc kubenswrapper[4593]: I0129 11:38:00.011783 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d"} Jan 29 11:38:01 crc kubenswrapper[4593]: I0129 11:38:01.025669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerStarted","Data":"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a"} Jan 29 11:38:01 crc kubenswrapper[4593]: I0129 11:38:01.057699 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ppw2m" podStartSLOduration=2.569719041 podStartE2EDuration="8.057666229s" podCreationTimestamp="2026-01-29 11:37:53 +0000 UTC" firstStartedPulling="2026-01-29 11:37:54.956648186 +0000 UTC m=+2340.829682377" lastFinishedPulling="2026-01-29 11:38:00.444595364 +0000 UTC m=+2346.317629565" observedRunningTime="2026-01-29 11:38:01.047276945 +0000 UTC m=+2346.920311146" watchObservedRunningTime="2026-01-29 11:38:01.057666229 +0000 UTC m=+2346.930700420" Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.460032 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.460533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.946511 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.946594 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:38:04 crc kubenswrapper[4593]: I0129 11:38:04.503494 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ppw2m" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" probeResult="failure" output=< Jan 29 11:38:04 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:38:04 crc kubenswrapper[4593]: > Jan 29 11:38:13 crc kubenswrapper[4593]: I0129 11:38:13.504210 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:13 crc kubenswrapper[4593]: I0129 11:38:13.551481 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:13 crc kubenswrapper[4593]: I0129 11:38:13.744626 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.184871 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ppw2m" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" containerID="cri-o://acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" gracePeriod=2 Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.660373 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.779059 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.779510 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.779574 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.781442 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities" (OuterVolumeSpecName: "utilities") pod "15a2cd22-170c-4450-accf-d5d0a7f5a7f7" (UID: "15a2cd22-170c-4450-accf-d5d0a7f5a7f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.795968 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx" (OuterVolumeSpecName: "kube-api-access-xfmfx") pod "15a2cd22-170c-4450-accf-d5d0a7f5a7f7" (UID: "15a2cd22-170c-4450-accf-d5d0a7f5a7f7"). InnerVolumeSpecName "kube-api-access-xfmfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.839348 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15a2cd22-170c-4450-accf-d5d0a7f5a7f7" (UID: "15a2cd22-170c-4450-accf-d5d0a7f5a7f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.881884 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.881917 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") on node \"crc\" DevicePath \"\"" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.881930 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197001 4593 generic.go:334] "Generic (PLEG): container finished" podID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" exitCode=0 Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197066 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a"} Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197092 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197124 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"91ae26aa44dcad5158d3b712c06f9da2552490c44bd51cd521f017e0fab71b0b"} Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197176 4593 scope.go:117] "RemoveContainer" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.243301 4593 scope.go:117] "RemoveContainer" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.251073 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.265071 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.266089 4593 scope.go:117] "RemoveContainer" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.318667 4593 scope.go:117] "RemoveContainer" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" Jan 29 11:38:16 crc kubenswrapper[4593]: E0129 11:38:16.319069 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a\": container with ID starting with acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a not found: ID does not exist" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319108 
4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a"} err="failed to get container status \"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a\": rpc error: code = NotFound desc = could not find container \"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a\": container with ID starting with acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a not found: ID does not exist" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319178 4593 scope.go:117] "RemoveContainer" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" Jan 29 11:38:16 crc kubenswrapper[4593]: E0129 11:38:16.319389 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d\": container with ID starting with 785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d not found: ID does not exist" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319409 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d"} err="failed to get container status \"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d\": rpc error: code = NotFound desc = could not find container \"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d\": container with ID starting with 785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d not found: ID does not exist" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319423 4593 scope.go:117] "RemoveContainer" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" Jan 29 11:38:16 crc kubenswrapper[4593]: E0129 11:38:16.319806 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db\": container with ID starting with 83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db not found: ID does not exist" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319828 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db"} err="failed to get container status \"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db\": rpc error: code = NotFound desc = could not find container \"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db\": container with ID starting with 83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db not found: ID does not exist" Jan 29 11:38:17 crc kubenswrapper[4593]: I0129 11:38:17.092479 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" path="/var/lib/kubelet/pods/15a2cd22-170c-4450-accf-d5d0a7f5a7f7/volumes" Jan 29 11:38:33 crc kubenswrapper[4593]: I0129 11:38:33.945967 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:38:33 crc kubenswrapper[4593]: I0129 11:38:33.946599 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.946779 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.947561 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.947615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.948651 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.948819 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" gracePeriod=600 Jan 29 11:39:04 crc kubenswrapper[4593]: E0129 11:39:04.074456 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 11:39:04.647962 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" exitCode=0 Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 11:39:04.648015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"} Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 11:39:04.648130 4593 scope.go:117] "RemoveContainer" containerID="466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972" Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 
11:39:04.649022 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:04 crc kubenswrapper[4593]: E0129 11:39:04.649756 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:18 crc kubenswrapper[4593]: I0129 11:39:18.075811 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:18 crc kubenswrapper[4593]: E0129 11:39:18.077039 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:30 crc kubenswrapper[4593]: I0129 11:39:30.076954 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:30 crc kubenswrapper[4593]: E0129 11:39:30.077745 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:45 crc kubenswrapper[4593]: I0129 11:39:45.081811 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:45 crc kubenswrapper[4593]: E0129 11:39:45.082690 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:59 crc kubenswrapper[4593]: I0129 11:39:59.075417 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:59 crc kubenswrapper[4593]: E0129 11:39:59.076383 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:13 crc kubenswrapper[4593]: I0129 11:40:13.075662 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:13 crc kubenswrapper[4593]: E0129 11:40:13.076458 
4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:26 crc kubenswrapper[4593]: I0129 11:40:26.077016 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:26 crc kubenswrapper[4593]: E0129 11:40:26.079694 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:37 crc kubenswrapper[4593]: I0129 11:40:37.074903 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:37 crc kubenswrapper[4593]: E0129 11:40:37.075549 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:48 crc kubenswrapper[4593]: I0129 11:40:48.075334 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:48 crc kubenswrapper[4593]: E0129 11:40:48.077334 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:59 crc kubenswrapper[4593]: I0129 11:40:59.077417 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:59 crc kubenswrapper[4593]: E0129 11:40:59.078312 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:07 crc kubenswrapper[4593]: I0129 11:41:07.862789 4593 generic.go:334] "Generic (PLEG): container finished" podID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerID="d0dad791e1b4a4ce15ef06b2c8538abd555b7ecb9305ee001925866de13618a6" exitCode=0 Jan 29 11:41:07 crc kubenswrapper[4593]: I0129 11:41:07.862910 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerDied","Data":"d0dad791e1b4a4ce15ef06b2c8538abd555b7ecb9305ee001925866de13618a6"} Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.296567 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439397 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439476 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439526 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439659 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.450456 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.451553 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm" (OuterVolumeSpecName: "kube-api-access-w9vrm") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "kube-api-access-w9vrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.487058 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory" (OuterVolumeSpecName: "inventory") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.488978 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.500227 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541493 4593 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541546 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541561 4593 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541579 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541594 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.930532 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerDied","Data":"630e5bb315500c97ee35063cb0b1025dae526568ec5b2fc147514f582e1d824e"} Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.930582 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630e5bb315500c97ee35063cb0b1025dae526568ec5b2fc147514f582e1d824e" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.930617 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014026 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"] Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014440 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-content" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014460 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-content" Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014468 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014475 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014488 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014495 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014512 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-utilities" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014521 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-utilities" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014753 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014772 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.015483 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.017604 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.021522 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.021794 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022338 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022513 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022360 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022982 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.039718 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"] Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088677 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088721 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088792 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088845 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088864 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088889 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088927 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.089643 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.089831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.191879 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192210 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192263 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192306 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192344 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192379 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192395 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192453 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192530 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.194450 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.196675 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.197644 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.199312 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.200004 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.200272 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.202051 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.204121 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.215008 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.332726 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.872942 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"] Jan 29 11:41:10 crc kubenswrapper[4593]: W0129 11:41:10.878872 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45f3aca_42e1_4105_b843_f5288550ce8c.slice/crio-3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2 WatchSource:0}: Error finding container 3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2: Status 404 returned error can't find the container with id 3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2 Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.882593 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.944215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerStarted","Data":"3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2"} Jan 29 11:41:11 crc kubenswrapper[4593]: I0129 11:41:11.954352 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerStarted","Data":"b57226db838e93862713f292f9315141a4f22f891753ea3cbd93990d176edcc4"} Jan 29 11:41:11 crc kubenswrapper[4593]: I0129 11:41:11.976811 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" podStartSLOduration=2.373611192 podStartE2EDuration="2.976793336s" podCreationTimestamp="2026-01-29 11:41:09 +0000 UTC" firstStartedPulling="2026-01-29 11:41:10.882185286 +0000 UTC m=+2536.755219487" lastFinishedPulling="2026-01-29 11:41:11.4853674 +0000 UTC m=+2537.358401631" observedRunningTime="2026-01-29 11:41:11.969725963 +0000 UTC m=+2537.842760154" watchObservedRunningTime="2026-01-29 11:41:11.976793336 +0000 UTC m=+2537.849827527" Jan 29 11:41:14 crc kubenswrapper[4593]: I0129 11:41:14.075509 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:14 crc kubenswrapper[4593]: E0129 11:41:14.076317 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:26 crc kubenswrapper[4593]: I0129 11:41:26.075750 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:26 crc kubenswrapper[4593]: E0129 11:41:26.076703 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:39 crc kubenswrapper[4593]: I0129 11:41:39.075284 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:39 crc kubenswrapper[4593]: E0129 11:41:39.076013 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:51 crc kubenswrapper[4593]: I0129 11:41:51.075521 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:51 crc kubenswrapper[4593]: E0129 11:41:51.076231 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:03 crc kubenswrapper[4593]: I0129 11:42:03.075256 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:03 crc kubenswrapper[4593]: E0129 11:42:03.076042 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.713532 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.717575 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.737165 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.743957 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.744078 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.744127 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.845693 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.845979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.846093 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.846193 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.846400 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.867576 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:06 crc kubenswrapper[4593]: I0129 11:42:06.053491 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:06 crc kubenswrapper[4593]: I0129 11:42:06.593235 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:42:06 crc kubenswrapper[4593]: I0129 11:42:06.648483 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerStarted","Data":"41ece5201791f94fd3acfd06ed7b4e84ad465e9e4c76175fabaa5d1d99f6ff8c"} Jan 29 11:42:07 crc kubenswrapper[4593]: I0129 11:42:07.658456 4593 generic.go:334] "Generic (PLEG): container finished" podID="69e48707-1458-40da-aa50-9f79ccef1297" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" exitCode=0 Jan 29 11:42:07 crc kubenswrapper[4593]: I0129 11:42:07.658509 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee"} Jan 29 11:42:09 crc kubenswrapper[4593]: I0129 11:42:09.685774 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerStarted","Data":"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a"} Jan 29 11:42:14 crc kubenswrapper[4593]: I0129 11:42:14.747523 4593 generic.go:334] "Generic (PLEG): container finished" podID="69e48707-1458-40da-aa50-9f79ccef1297" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" exitCode=0 Jan 29 11:42:14 crc kubenswrapper[4593]: I0129 11:42:14.747669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a"} Jan 29 11:42:15 crc kubenswrapper[4593]: I0129 11:42:15.082882 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:15 crc kubenswrapper[4593]: E0129 11:42:15.083557 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:15 crc kubenswrapper[4593]: I0129 11:42:15.763149 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerStarted","Data":"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8"} Jan 29 11:42:15 crc kubenswrapper[4593]: I0129 11:42:15.797563 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-drmwg" podStartSLOduration=3.1882229029999998 podStartE2EDuration="10.79753768s" podCreationTimestamp="2026-01-29 11:42:05 +0000 UTC" firstStartedPulling="2026-01-29 11:42:07.660485001 +0000 UTC m=+2593.533519192" lastFinishedPulling="2026-01-29 11:42:15.269799778 +0000 UTC m=+2601.142833969" observedRunningTime="2026-01-29 11:42:15.792417551 +0000 UTC m=+2601.665451742" watchObservedRunningTime="2026-01-29 11:42:15.79753768 +0000 UTC m=+2601.670571871" Jan 29 11:42:16 crc kubenswrapper[4593]: I0129 11:42:16.054840 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:16 crc kubenswrapper[4593]: I0129 11:42:16.054914 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:17 crc kubenswrapper[4593]: I0129 11:42:17.099450 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:17 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:17 crc kubenswrapper[4593]: > Jan 29 11:42:27 crc kubenswrapper[4593]: I0129 11:42:27.105873 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:27 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:27 crc kubenswrapper[4593]: > Jan 29 11:42:29 crc kubenswrapper[4593]: I0129 11:42:29.077750 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:29 crc kubenswrapper[4593]: E0129 11:42:29.078309 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:37 crc kubenswrapper[4593]: I0129 11:42:37.101225 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:37 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:37 crc kubenswrapper[4593]: > Jan 29 11:42:43 crc kubenswrapper[4593]: I0129 11:42:43.075525 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:43 crc kubenswrapper[4593]: E0129 11:42:43.076219 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:47 crc kubenswrapper[4593]: I0129 11:42:47.099502 4593 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:47 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:47 crc kubenswrapper[4593]: > Jan 29 11:42:57 crc kubenswrapper[4593]: I0129 11:42:57.075143 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:57 crc kubenswrapper[4593]: E0129 11:42:57.075883 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:57 crc kubenswrapper[4593]: I0129 11:42:57.101955 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:57 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:57 crc kubenswrapper[4593]: > Jan 29 11:43:06 crc kubenswrapper[4593]: I0129 11:43:06.112189 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:06 crc kubenswrapper[4593]: I0129 11:43:06.163301 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:06 crc kubenswrapper[4593]: I0129 11:43:06.936134 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:43:07 crc kubenswrapper[4593]: I0129 11:43:07.299180 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" containerID="cri-o://e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" gracePeriod=2 Jan 29 11:43:07 crc kubenswrapper[4593]: I0129 11:43:07.820491 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.015327 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"69e48707-1458-40da-aa50-9f79ccef1297\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.015481 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"69e48707-1458-40da-aa50-9f79ccef1297\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.015782 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"69e48707-1458-40da-aa50-9f79ccef1297\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.016319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities" (OuterVolumeSpecName: "utilities") pod "69e48707-1458-40da-aa50-9f79ccef1297" (UID: "69e48707-1458-40da-aa50-9f79ccef1297"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.021563 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb" (OuterVolumeSpecName: "kube-api-access-mnxkb") pod "69e48707-1458-40da-aa50-9f79ccef1297" (UID: "69e48707-1458-40da-aa50-9f79ccef1297"). InnerVolumeSpecName "kube-api-access-mnxkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.118158 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.118205 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.144195 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69e48707-1458-40da-aa50-9f79ccef1297" (UID: "69e48707-1458-40da-aa50-9f79ccef1297"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.220202 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.320974 4593 generic.go:334] "Generic (PLEG): container finished" podID="69e48707-1458-40da-aa50-9f79ccef1297" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" exitCode=0 Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8"} Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"41ece5201791f94fd3acfd06ed7b4e84ad465e9e4c76175fabaa5d1d99f6ff8c"} Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321109 4593 scope.go:117] "RemoveContainer" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321337 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.367959 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.369181 4593 scope.go:117] "RemoveContainer" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.378668 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.401885 4593 scope.go:117] "RemoveContainer" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.444614 4593 scope.go:117] "RemoveContainer" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" Jan 29 11:43:08 crc kubenswrapper[4593]: E0129 11:43:08.445183 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8\": container with ID starting with e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8 not found: ID does not exist" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445241 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8"} err="failed to get container status \"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8\": rpc error: code = NotFound desc = could not find container \"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8\": container with ID starting with e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8 not found: ID does not exist" Jan 29 11:43:08 crc 
kubenswrapper[4593]: I0129 11:43:08.445272 4593 scope.go:117] "RemoveContainer" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" Jan 29 11:43:08 crc kubenswrapper[4593]: E0129 11:43:08.445559 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a\": container with ID starting with 4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a not found: ID does not exist" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445583 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a"} err="failed to get container status \"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a\": rpc error: code = NotFound desc = could not find container \"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a\": container with ID starting with 4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a not found: ID does not exist" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445597 4593 scope.go:117] "RemoveContainer" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" Jan 29 11:43:08 crc kubenswrapper[4593]: E0129 11:43:08.445872 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee\": container with ID starting with 546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee not found: ID does not exist" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445892 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee"} err="failed to get container status \"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee\": rpc error: code = NotFound desc = could not find container \"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee\": container with ID starting with 546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee not found: ID does not exist" Jan 29 11:43:09 crc kubenswrapper[4593]: I0129 11:43:09.087553 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e48707-1458-40da-aa50-9f79ccef1297" path="/var/lib/kubelet/pods/69e48707-1458-40da-aa50-9f79ccef1297/volumes" Jan 29 11:43:10 crc kubenswrapper[4593]: I0129 11:43:10.075198 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:43:10 crc kubenswrapper[4593]: E0129 11:43:10.075449 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:43:23 crc kubenswrapper[4593]: I0129 11:43:23.075322 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" 
Jan 29 11:43:23 crc kubenswrapper[4593]: E0129 11:43:23.076213 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:43:38 crc kubenswrapper[4593]: I0129 11:43:38.076337 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:43:38 crc kubenswrapper[4593]: E0129 11:43:38.076979 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:43:46 crc kubenswrapper[4593]: I0129 11:43:46.685436 4593 generic.go:334] "Generic (PLEG): container finished" podID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerID="b57226db838e93862713f292f9315141a4f22f891753ea3cbd93990d176edcc4" exitCode=0 Jan 29 11:43:46 crc kubenswrapper[4593]: I0129 11:43:46.685550 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerDied","Data":"b57226db838e93862713f292f9315141a4f22f891753ea3cbd93990d176edcc4"} Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.705284 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerDied","Data":"3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2"} Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.705828 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.705939 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854013 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854079 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854138 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854161 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854243 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854268 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854384 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854407 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854445 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.873544 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m" (OuterVolumeSpecName: "kube-api-access-wjx4m") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "kube-api-access-wjx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.880019 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.881869 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.891463 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.894532 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.896978 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.897902 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.898775 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.911387 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory" (OuterVolumeSpecName: "inventory") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956198 4593 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956247 4593 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956259 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956273 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956285 4593 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956297 4593 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956308 4593 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956322 4593 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956335 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.715942 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.880672 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"] Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881516 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881548 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881563 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881571 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881597 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-content" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881603 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-content" Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881618 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-utilities" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881625 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-utilities" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881880 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881900 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.882698 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.887236 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.887461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.889169 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"] Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.889352 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.889359 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.891773 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.975756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.975807 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.975850 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976015 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976060 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976314 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077901 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.078000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.078027 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.078060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.082746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.082947 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.084037 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.084960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.086179 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.086475 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.101286 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxjf\" 
(UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.208046 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.556260 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"] Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.725499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerStarted","Data":"059bd591328bff46e6e65cfb00889c1f2fc8ff93c51a070940e99bbd963791fa"} Jan 29 11:43:51 crc kubenswrapper[4593]: I0129 11:43:51.075825 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:43:51 crc kubenswrapper[4593]: E0129 11:43:51.076147 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:43:51 crc kubenswrapper[4593]: I0129 11:43:51.734621 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerStarted","Data":"f616db1f2537dd79ee16bc7d11fbdfb4f7448ae23d7f778070810ae6e0373cc3"} Jan 29 11:43:51 crc kubenswrapper[4593]: I0129 11:43:51.774208 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" podStartSLOduration=2.370623561 podStartE2EDuration="2.774174985s" podCreationTimestamp="2026-01-29 11:43:49 +0000 UTC" firstStartedPulling="2026-01-29 11:43:50.563841883 +0000 UTC m=+2696.436876094" lastFinishedPulling="2026-01-29 11:43:50.967393317 +0000 UTC m=+2696.840427518" observedRunningTime="2026-01-29 11:43:51.766169808 +0000 UTC m=+2697.639204009" watchObservedRunningTime="2026-01-29 11:43:51.774174985 +0000 UTC m=+2697.647209176" Jan 29 11:44:03 crc kubenswrapper[4593]: I0129 11:44:03.075255 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:44:03 crc kubenswrapper[4593]: E0129 11:44:03.075891 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:44:16 crc kubenswrapper[4593]: I0129 11:44:16.074505 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:44:16 crc 
kubenswrapper[4593]: I0129 11:44:16.983095 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"} Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.148766 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.150317 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.153146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.153854 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.170555 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.267095 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.267160 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.267264 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.368721 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.368763 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 
11:45:00.368817 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.369746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.375474 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.386268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.481151 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.939792 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 11:45:01 crc kubenswrapper[4593]: I0129 11:45:01.410375 4593 generic.go:334] "Generic (PLEG): container finished" podID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerID="774b5de0fbc462ffcb1b94ee57144a8198c30add9d0ae3a9eee99f2a26a14b82" exitCode=0 Jan 29 11:45:01 crc kubenswrapper[4593]: I0129 11:45:01.410480 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" event={"ID":"dc4e2861-f7e0-40bb-bb77-b0fdd3498554","Type":"ContainerDied","Data":"774b5de0fbc462ffcb1b94ee57144a8198c30add9d0ae3a9eee99f2a26a14b82"} Jan 29 11:45:01 crc kubenswrapper[4593]: I0129 11:45:01.410752 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" event={"ID":"dc4e2861-f7e0-40bb-bb77-b0fdd3498554","Type":"ContainerStarted","Data":"c88db5300c04314732be5ce93aae32e7d41e372a77e36185fe67c16c38035005"} Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.733966 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.816677 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.816865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.817053 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.818020 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc4e2861-f7e0-40bb-bb77-b0fdd3498554" (UID: "dc4e2861-f7e0-40bb-bb77-b0fdd3498554"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.822541 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc4e2861-f7e0-40bb-bb77-b0fdd3498554" (UID: "dc4e2861-f7e0-40bb-bb77-b0fdd3498554"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.845843 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp" (OuterVolumeSpecName: "kube-api-access-nh6zp") pod "dc4e2861-f7e0-40bb-bb77-b0fdd3498554" (UID: "dc4e2861-f7e0-40bb-bb77-b0fdd3498554"). InnerVolumeSpecName "kube-api-access-nh6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.919818 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.920141 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.920246 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.429069 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.429029 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" event={"ID":"dc4e2861-f7e0-40bb-bb77-b0fdd3498554","Type":"ContainerDied","Data":"c88db5300c04314732be5ce93aae32e7d41e372a77e36185fe67c16c38035005"} Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.429942 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88db5300c04314732be5ce93aae32e7d41e372a77e36185fe67c16c38035005" Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.832967 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.842131 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:45:05 crc kubenswrapper[4593]: I0129 11:45:05.109678 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" path="/var/lib/kubelet/pods/eef5dc1f-d576-46dd-9de7-2a63c6d4157f/volumes" Jan 29 11:45:15 crc kubenswrapper[4593]: I0129 11:45:15.111358 4593 scope.go:117] "RemoveContainer" containerID="a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972" Jan 29 11:46:33 crc kubenswrapper[4593]: I0129 11:46:33.946417 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:46:33 crc kubenswrapper[4593]: I0129 11:46:33.947138 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:03 crc kubenswrapper[4593]: I0129 11:47:03.945762 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:47:03 crc kubenswrapper[4593]: I0129 11:47:03.946400 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:17 crc kubenswrapper[4593]: I0129 11:47:17.031941 4593 generic.go:334] "Generic (PLEG): container finished" podID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerID="f616db1f2537dd79ee16bc7d11fbdfb4f7448ae23d7f778070810ae6e0373cc3" exitCode=0 Jan 29 11:47:17 crc kubenswrapper[4593]: I0129 11:47:17.033131 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" 
event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerDied","Data":"f616db1f2537dd79ee16bc7d11fbdfb4f7448ae23d7f778070810ae6e0373cc3"} Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.520492 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668171 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668271 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668312 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668365 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668407 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668460 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668536 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.679057 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf" (OuterVolumeSpecName: "kube-api-access-sfxjf") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "kube-api-access-sfxjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.679389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.697906 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.703920 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.708657 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.720818 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.728721 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory" (OuterVolumeSpecName: "inventory") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773014 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773065 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773080 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773108 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773122 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773168 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773183 4593 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:19 crc kubenswrapper[4593]: I0129 11:47:19.053511 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:47:19 crc kubenswrapper[4593]: I0129 11:47:19.053265 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerDied","Data":"059bd591328bff46e6e65cfb00889c1f2fc8ff93c51a070940e99bbd963791fa"} Jan 29 11:47:19 crc kubenswrapper[4593]: I0129 11:47:19.053603 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059bd591328bff46e6e65cfb00889c1f2fc8ff93c51a070940e99bbd963791fa" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.947877 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.950004 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.950106 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.951140 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.951227 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63" gracePeriod=600 Jan 29 11:47:34 crc kubenswrapper[4593]: I0129 11:47:34.210833 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63" exitCode=0 Jan 29 11:47:34 crc kubenswrapper[4593]: I0129 11:47:34.211172 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"} Jan 29 11:47:34 crc kubenswrapper[4593]: I0129 11:47:34.211255 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:47:35 crc kubenswrapper[4593]: I0129 11:47:35.221536 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"} Jan 29 11:48:21 crc 
kubenswrapper[4593]: I0129 11:48:21.151359 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 11:48:21 crc kubenswrapper[4593]: E0129 11:48:21.153408 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.153513 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 11:48:21 crc kubenswrapper[4593]: E0129 11:48:21.153586 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerName="collect-profiles" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.153677 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerName="collect-profiles" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.154031 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.154505 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerName="collect-profiles" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.155288 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.159288 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.163100 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.167435 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vt7mb" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.168012 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.177307 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233450 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233507 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233565 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233729 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233769 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233897 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233924 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233994 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336310 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336399 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336464 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336533 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336569 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336606 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336789 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.337167 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.337181 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.337804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " 
pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.338087 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.343344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.343486 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.344528 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.350667 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.356916 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.378505 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.477454 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.958516 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.962886 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:48:22 crc kubenswrapper[4593]: I0129 11:48:22.648275 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerStarted","Data":"bf88caa96b3fd17945a137b250bf9d7f8872b0e8469ad3aa1ab198d63888646d"} Jan 29 11:49:19 crc kubenswrapper[4593]: E0129 11:49:19.962646 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 29 11:49:19 crc kubenswrapper[4593]: E0129 11:49:19.966491 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d5ea9892-a149-4cfe-bb9c-ef636eacd125): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:49:19 crc kubenswrapper[4593]: E0129 11:49:19.967765 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" Jan 29 11:49:20 crc kubenswrapper[4593]: E0129 11:49:20.251467 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" Jan 29 11:49:35 crc kubenswrapper[4593]: I0129 11:49:35.605380 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 11:49:37 crc kubenswrapper[4593]: I0129 11:49:37.447161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerStarted","Data":"f1bbc49dcc0cd36e38a7fd4617bfb0fd01fe811e0e734a91b4f25ae6b23bbeaf"} Jan 29 11:49:37 crc kubenswrapper[4593]: I0129 11:49:37.473016 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.833671652 podStartE2EDuration="1m17.472982474s" podCreationTimestamp="2026-01-29 11:48:20 +0000 UTC" firstStartedPulling="2026-01-29 11:48:21.962536529 +0000 UTC m=+2967.835570720" lastFinishedPulling="2026-01-29 11:49:35.601847351 +0000 UTC m=+3041.474881542" observedRunningTime="2026-01-29 11:49:37.470279221 +0000 UTC m=+3043.343313412" watchObservedRunningTime="2026-01-29 11:49:37.472982474 +0000 UTC m=+3043.346016665" Jan 29 11:50:03 crc kubenswrapper[4593]: I0129 11:50:03.946459 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:50:03 crc kubenswrapper[4593]: I0129 11:50:03.947148 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.349678 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"] Jan 29 11:50:29 crc 
kubenswrapper[4593]: I0129 11:50:29.353269 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.415810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.415901 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.415939 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.502499 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"] Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.517393 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.518010 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.517592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.518475 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.518837 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc 
kubenswrapper[4593]: I0129 11:50:29.551595 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.687996 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:30 crc kubenswrapper[4593]: I0129 11:50:30.496658 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"] Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.115898 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58nql"] Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.118877 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.123297 4593 generic.go:334] "Generic (PLEG): container finished" podID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189" exitCode=0 Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.123376 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"} Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.123423 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerStarted","Data":"9c4bf50beffc67a77f212f98f53ffeb5265c547884bf5bccd7cd8cbcbe7a9fa7"} Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.135754 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58nql"] Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.269246 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.269557 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.269708 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.371991 4593 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.372064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.372122 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.372663 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.373214 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.398043 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.467098 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.775733 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9chvf"] Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.781730 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.790777 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9chvf"] Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.885035 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.885254 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.885293 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.988601 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.988678 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.988779 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.989112 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.989245 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.027887 4593 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58nql"] Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.032157 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.126063 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.158341 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerStarted","Data":"6ace7fce8dca888321cdd4f035fa5e56a84f122f5c45639df165368111d7df69"} Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.700258 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9chvf"] Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.177162 4593 generic.go:334] "Generic (PLEG): container finished" podID="0c132853-6130-49f2-a704-a03e51d90d5b" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f" exitCode=0 Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.178468 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"} Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.178502 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerStarted","Data":"4577186316c08b3900720726645ed16abaae0f401c8a9700e23d4a86b7c97742"} Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.180805 4593 generic.go:334] "Generic (PLEG): container finished" podID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2" exitCode=0 Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.180936 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"} Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.945960 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.946364 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:50:34 crc kubenswrapper[4593]: I0129 11:50:34.205022 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerStarted","Data":"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"} Jan 29 11:50:34 crc kubenswrapper[4593]: I0129 11:50:34.211846 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerStarted","Data":"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"} Jan 29 11:50:36 crc kubenswrapper[4593]: I0129 11:50:36.230111 4593 generic.go:334] "Generic (PLEG): container finished" podID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300" exitCode=0 Jan 29 11:50:36 crc kubenswrapper[4593]: I0129 11:50:36.231594 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"} Jan 29 11:50:36 crc kubenswrapper[4593]: I0129 11:50:36.234997 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerStarted","Data":"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"} Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.278149 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerStarted","Data":"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"} Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.304705 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gjxww" podStartSLOduration=3.591708669 podStartE2EDuration="10.304670768s" podCreationTimestamp="2026-01-29 11:50:29 +0000 UTC" firstStartedPulling="2026-01-29 11:50:31.127002968 +0000 UTC m=+3097.000037149" lastFinishedPulling="2026-01-29 11:50:37.839965057 +0000 UTC m=+3103.712999248" observedRunningTime="2026-01-29 11:50:39.30325248 +0000 UTC m=+3105.176286671" watchObservedRunningTime="2026-01-29 11:50:39.304670768 +0000 UTC m=+3105.177704959" Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.688280 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.688334 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:40 crc kubenswrapper[4593]: I0129 11:50:40.289414 4593 generic.go:334] "Generic (PLEG): container finished" podID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3" exitCode=0 Jan 29 11:50:40 crc kubenswrapper[4593]: I0129 11:50:40.289477 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"} Jan 29 11:50:40 crc kubenswrapper[4593]: I0129 11:50:40.738170 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gjxww" 
podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" probeResult="failure" output=< Jan 29 11:50:40 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:50:40 crc kubenswrapper[4593]: > Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.302323 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerStarted","Data":"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"} Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.305515 4593 generic.go:334] "Generic (PLEG): container finished" podID="0c132853-6130-49f2-a704-a03e51d90d5b" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d" exitCode=0 Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.305573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"} Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.394009 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58nql" podStartSLOduration=2.861035154 podStartE2EDuration="10.393985819s" podCreationTimestamp="2026-01-29 11:50:31 +0000 UTC" firstStartedPulling="2026-01-29 11:50:33.194396984 +0000 UTC m=+3099.067431175" lastFinishedPulling="2026-01-29 11:50:40.727347649 +0000 UTC m=+3106.600381840" observedRunningTime="2026-01-29 11:50:41.363862381 +0000 UTC m=+3107.236896572" watchObservedRunningTime="2026-01-29 11:50:41.393985819 +0000 UTC m=+3107.267020020" Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.467684 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.468010 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:50:42 crc kubenswrapper[4593]: I0129 11:50:42.564919 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-58nql" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" probeResult="failure" output=< Jan 29 11:50:42 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:50:42 crc kubenswrapper[4593]: > Jan 29 11:50:43 crc kubenswrapper[4593]: I0129 11:50:43.326575 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerStarted","Data":"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"} Jan 29 11:50:43 crc kubenswrapper[4593]: I0129 11:50:43.386796 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9chvf" podStartSLOduration=2.637036507 podStartE2EDuration="12.386773561s" podCreationTimestamp="2026-01-29 11:50:31 +0000 UTC" firstStartedPulling="2026-01-29 11:50:33.194190228 +0000 UTC m=+3099.067224419" lastFinishedPulling="2026-01-29 11:50:42.943927282 +0000 UTC m=+3108.816961473" observedRunningTime="2026-01-29 11:50:43.382476994 +0000 UTC m=+3109.255511185" watchObservedRunningTime="2026-01-29 11:50:43.386773561 +0000 UTC m=+3109.259807752" Jan 29 
11:50:50 crc kubenswrapper[4593]: I0129 11:50:50.735990 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gjxww" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" probeResult="failure" output=< Jan 29 11:50:50 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:50:50 crc kubenswrapper[4593]: > Jan 29 11:50:52 crc kubenswrapper[4593]: I0129 11:50:52.126898 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:52 crc kubenswrapper[4593]: I0129 11:50:52.126960 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:50:52 crc kubenswrapper[4593]: I0129 11:50:52.542105 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-58nql" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" probeResult="failure" output=< Jan 29 11:50:52 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:50:52 crc kubenswrapper[4593]: > Jan 29 11:50:53 crc kubenswrapper[4593]: I0129 11:50:53.185167 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9chvf" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" probeResult="failure" output=< Jan 29 11:50:53 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:50:53 crc kubenswrapper[4593]: > Jan 29 11:50:59 crc kubenswrapper[4593]: I0129 11:50:59.742093 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:50:59 crc kubenswrapper[4593]: I0129 11:50:59.810537 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:51:00 crc kubenswrapper[4593]: I0129 11:51:00.541427 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"] Jan 29 11:51:01 crc kubenswrapper[4593]: I0129 11:51:01.502192 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gjxww" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" containerID="cri-o://de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" gracePeriod=2 Jan 29 11:51:01 crc kubenswrapper[4593]: I0129 11:51:01.527985 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:51:01 crc kubenswrapper[4593]: I0129 11:51:01.580906 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.189458 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.222218 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.258823 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.320104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.320246 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.320374 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.321247 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities" (OuterVolumeSpecName: "utilities") pod "8e6133a0-5080-40db-ab5c-3f6e365b33f0" (UID: "8e6133a0-5080-40db-ab5c-3f6e365b33f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.322795 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.337969 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2" (OuterVolumeSpecName: "kube-api-access-vbrl2") pod "8e6133a0-5080-40db-ab5c-3f6e365b33f0" (UID: "8e6133a0-5080-40db-ab5c-3f6e365b33f0"). InnerVolumeSpecName "kube-api-access-vbrl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.354896 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e6133a0-5080-40db-ab5c-3f6e365b33f0" (UID: "8e6133a0-5080-40db-ab5c-3f6e365b33f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.424649 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.424688 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.512830 4593 generic.go:334] "Generic (PLEG): container finished" podID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" exitCode=0 Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.512907 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.512989 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"} Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.513074 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"9c4bf50beffc67a77f212f98f53ffeb5265c547884bf5bccd7cd8cbcbe7a9fa7"} Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.513995 4593 scope.go:117] "RemoveContainer" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.538350 4593 scope.go:117] "RemoveContainer" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.566799 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"] Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.607567 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"] Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.626034 4593 scope.go:117] "RemoveContainer" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.670849 4593 scope.go:117] "RemoveContainer" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" Jan 29 11:51:02 crc kubenswrapper[4593]: E0129 11:51:02.671361 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87\": container with ID starting with de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87 not found: ID does not exist" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671419 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"} err="failed to get container status 
\"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87\": rpc error: code = NotFound desc = could not find container \"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87\": container with ID starting with de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87 not found: ID does not exist" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671441 4593 scope.go:117] "RemoveContainer" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300" Jan 29 11:51:02 crc kubenswrapper[4593]: E0129 11:51:02.671788 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300\": container with ID starting with 0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300 not found: ID does not exist" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671830 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"} err="failed to get container status \"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300\": rpc error: code = NotFound desc = could not find container \"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300\": container with ID starting with 0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300 not found: ID does not exist" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671848 4593 scope.go:117] "RemoveContainer" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189" Jan 29 11:51:02 crc kubenswrapper[4593]: E0129 11:51:02.672479 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189\": container with ID starting with 93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189 not found: ID does not exist" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189" Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.672508 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"} err="failed to get container status \"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189\": rpc error: code = NotFound desc = could not find container \"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189\": container with ID starting with 93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189 not found: ID does not exist" Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.087260 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" path="/var/lib/kubelet/pods/8e6133a0-5080-40db-ab5c-3f6e365b33f0/volumes" Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.335827 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58nql"] Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.525949 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58nql" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" 
containerID="cri-o://5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" gracePeriod=2 Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.949779 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950026 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950073 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950905 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950952 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" gracePeriod=600 Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.092148 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.251149 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.400594 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.400818 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.400842 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.403251 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities" (OuterVolumeSpecName: "utilities") pod "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" (UID: "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.407079 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk" (OuterVolumeSpecName: "kube-api-access-6jpgk") pod "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" (UID: "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb"). InnerVolumeSpecName "kube-api-access-6jpgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.473458 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" (UID: "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.503423 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.503469 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.503485 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548850 4593 generic.go:334] "Generic (PLEG): container finished" podID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" exitCode=0 Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"} Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548975 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"6ace7fce8dca888321cdd4f035fa5e56a84f122f5c45639df165368111d7df69"} Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548997 4593 scope.go:117] "RemoveContainer" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.549139 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58nql" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.575775 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" exitCode=0 Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.577760 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"} Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.583420 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.587010 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.594501 4593 scope.go:117] "RemoveContainer" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.613586 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58nql"] Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.628815 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58nql"] Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.659888 4593 scope.go:117] "RemoveContainer" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.741214 4593 scope.go:117] "RemoveContainer" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.741751 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815\": container with ID starting with 5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815 not found: ID does not exist" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.741794 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"} err="failed to get container status \"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815\": rpc error: code = NotFound desc = could not find container \"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815\": container with ID starting with 5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815 not found: ID does not exist" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.741822 4593 scope.go:117] "RemoveContainer" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3" Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.742045 4593 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3\": container with ID starting with 90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3 not found: ID does not exist" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.742065 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"} err="failed to get container status \"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3\": rpc error: code = NotFound desc = could not find container \"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3\": container with ID starting with 90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3 not found: ID does not exist" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.742081 4593 scope.go:117] "RemoveContainer" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2" Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.743513 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2\": container with ID starting with 8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2 not found: ID does not exist" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.743556 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"} err="failed to get container status \"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2\": rpc error: code = NotFound desc = could not find container \"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2\": container with ID starting with 8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2 not found: ID does not exist" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.743572 4593 scope.go:117] "RemoveContainer" containerID="bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63" Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.755368 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9chvf"] Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.755677 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9chvf" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" containerID="cri-o://cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" gracePeriod=2 Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.099046 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" path="/var/lib/kubelet/pods/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb/volumes" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.430249 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530070 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"0c132853-6130-49f2-a704-a03e51d90d5b\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530238 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"0c132853-6130-49f2-a704-a03e51d90d5b\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530267 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"0c132853-6130-49f2-a704-a03e51d90d5b\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530921 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities" (OuterVolumeSpecName: "utilities") pod "0c132853-6130-49f2-a704-a03e51d90d5b" (UID: "0c132853-6130-49f2-a704-a03e51d90d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.560830 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr" (OuterVolumeSpecName: "kube-api-access-8jdrr") pod "0c132853-6130-49f2-a704-a03e51d90d5b" (UID: "0c132853-6130-49f2-a704-a03e51d90d5b"). InnerVolumeSpecName "kube-api-access-8jdrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594305 4593 generic.go:334] "Generic (PLEG): container finished" podID="0c132853-6130-49f2-a704-a03e51d90d5b" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" exitCode=0 Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594504 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"} Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594534 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"4577186316c08b3900720726645ed16abaae0f401c8a9700e23d4a86b7c97742"} Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594551 4593 scope.go:117] "RemoveContainer" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594671 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9chvf" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.628878 4593 scope.go:117] "RemoveContainer" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.633889 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.633912 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.636186 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c132853-6130-49f2-a704-a03e51d90d5b" (UID: "0c132853-6130-49f2-a704-a03e51d90d5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.670973 4593 scope.go:117] "RemoveContainer" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.735523 4593 scope.go:117] "RemoveContainer" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.735975 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:51:05 crc kubenswrapper[4593]: E0129 11:51:05.739916 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554\": container with ID starting with cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554 not found: ID does not exist" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740028 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"} err="failed to get container status \"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554\": rpc error: code = NotFound desc = could not find container \"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554\": container with ID starting with cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554 not found: ID does not exist" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740061 4593 scope.go:117] "RemoveContainer" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d" Jan 29 11:51:05 crc kubenswrapper[4593]: E0129 11:51:05.740515 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d\": container with ID starting with 2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d not found: ID does not exist" 
containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740555 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"} err="failed to get container status \"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d\": rpc error: code = NotFound desc = could not find container \"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d\": container with ID starting with 2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d not found: ID does not exist" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740574 4593 scope.go:117] "RemoveContainer" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f" Jan 29 11:51:05 crc kubenswrapper[4593]: E0129 11:51:05.741230 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f\": container with ID starting with 24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f not found: ID does not exist" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.741284 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"} err="failed to get container status \"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f\": rpc error: code = NotFound desc = could not find container \"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f\": container with ID starting with 24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f not found: ID does not exist" Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.932333 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9chvf"] Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.943344 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9chvf"] Jan 29 11:51:07 crc kubenswrapper[4593]: I0129 11:51:07.089472 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" path="/var/lib/kubelet/pods/0c132853-6130-49f2-a704-a03e51d90d5b/volumes" Jan 29 11:51:19 crc kubenswrapper[4593]: I0129 11:51:19.075739 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:19 crc kubenswrapper[4593]: E0129 11:51:19.077619 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:32 crc kubenswrapper[4593]: I0129 11:51:32.074531 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:32 crc kubenswrapper[4593]: E0129 11:51:32.075242 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:44 crc kubenswrapper[4593]: I0129 11:51:44.075300 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:44 crc kubenswrapper[4593]: E0129 11:51:44.076192 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:58 crc kubenswrapper[4593]: I0129 11:51:58.075923 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:58 crc kubenswrapper[4593]: E0129 11:51:58.076819 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:09 crc kubenswrapper[4593]: I0129 11:52:09.075056 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:09 crc kubenswrapper[4593]: E0129 11:52:09.075907 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:22 crc kubenswrapper[4593]: I0129 11:52:22.075299 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:22 crc kubenswrapper[4593]: E0129 11:52:22.076057 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:36 crc kubenswrapper[4593]: I0129 11:52:36.075511 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:36 crc kubenswrapper[4593]: E0129 11:52:36.076275 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:49 crc kubenswrapper[4593]: I0129 11:52:49.075800 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:49 crc kubenswrapper[4593]: E0129 11:52:49.076757 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:01 crc kubenswrapper[4593]: I0129 11:53:01.075604 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:01 crc kubenswrapper[4593]: E0129 11:53:01.076431 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:12 crc kubenswrapper[4593]: I0129 11:53:12.075111 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:12 crc kubenswrapper[4593]: E0129 11:53:12.076997 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:27 crc kubenswrapper[4593]: I0129 11:53:27.075234 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:27 crc kubenswrapper[4593]: E0129 11:53:27.076217 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:39 crc kubenswrapper[4593]: I0129 11:53:39.076363 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:39 crc kubenswrapper[4593]: E0129 11:53:39.077379 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:53 crc kubenswrapper[4593]: I0129 11:53:53.074797 4593 
scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:53 crc kubenswrapper[4593]: E0129 11:53:53.076733 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:08 crc kubenswrapper[4593]: I0129 11:54:08.075387 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:08 crc kubenswrapper[4593]: E0129 11:54:08.076346 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:23 crc kubenswrapper[4593]: I0129 11:54:23.075136 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:23 crc kubenswrapper[4593]: E0129 11:54:23.075920 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.306026 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307107 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307138 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307152 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307158 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307178 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307188 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307196 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" Jan 29 11:54:27 crc 
kubenswrapper[4593]: I0129 11:54:27.307209 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307223 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307228 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307235 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307241 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307254 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307260 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307271 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307278 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307290 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307298 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307537 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307557 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307572 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.308992 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.333417 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.470473 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.470827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.471239 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.572919 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573057 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573134 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573701 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573826 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.604394 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.640131 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.207190 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.537533 4593 generic.go:334] "Generic (PLEG): container finished" podID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerID="bc929a23cf8d1038032aac760cbbd186410de536e009c9bb9f788f8fc8527d9a" exitCode=0 Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.537578 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"bc929a23cf8d1038032aac760cbbd186410de536e009c9bb9f788f8fc8527d9a"} Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.537611 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerStarted","Data":"a8f692cc178e40d6dd2a183f0f930fe61616b4622888a4583e31fe0b88efede4"} Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.540863 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:54:31 crc kubenswrapper[4593]: I0129 11:54:31.575796 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerStarted","Data":"0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158"} Jan 29 11:54:34 crc kubenswrapper[4593]: I0129 11:54:34.075042 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:34 crc kubenswrapper[4593]: E0129 11:54:34.075616 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:40 crc kubenswrapper[4593]: I0129 11:54:40.664970 4593 generic.go:334] "Generic (PLEG): container finished" podID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerID="0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158" exitCode=0 Jan 29 11:54:40 crc kubenswrapper[4593]: I0129 11:54:40.665084 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158"} Jan 29 11:54:42 crc kubenswrapper[4593]: I0129 11:54:42.684780 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" 
event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerStarted","Data":"4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464"} Jan 29 11:54:42 crc kubenswrapper[4593]: I0129 11:54:42.714318 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bddf7" podStartSLOduration=2.78948601 podStartE2EDuration="15.71426674s" podCreationTimestamp="2026-01-29 11:54:27 +0000 UTC" firstStartedPulling="2026-01-29 11:54:28.540334287 +0000 UTC m=+3334.413368478" lastFinishedPulling="2026-01-29 11:54:41.465115017 +0000 UTC m=+3347.338149208" observedRunningTime="2026-01-29 11:54:42.712472922 +0000 UTC m=+3348.585507113" watchObservedRunningTime="2026-01-29 11:54:42.71426674 +0000 UTC m=+3348.587300941" Jan 29 11:54:46 crc kubenswrapper[4593]: I0129 11:54:46.075356 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:46 crc kubenswrapper[4593]: E0129 11:54:46.075941 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:47 crc kubenswrapper[4593]: I0129 11:54:47.640255 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:47 crc kubenswrapper[4593]: I0129 11:54:47.640598 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:48 crc kubenswrapper[4593]: I0129 11:54:48.705389 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" probeResult="failure" output=< Jan 29 11:54:48 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:54:48 crc kubenswrapper[4593]: > Jan 29 11:54:58 crc kubenswrapper[4593]: I0129 11:54:58.695256 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" probeResult="failure" output=< Jan 29 11:54:58 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:54:58 crc kubenswrapper[4593]: > Jan 29 11:55:01 crc kubenswrapper[4593]: I0129 11:55:01.074774 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:01 crc kubenswrapper[4593]: E0129 11:55:01.075374 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:08 crc kubenswrapper[4593]: I0129 11:55:08.692431 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" 
containerName="registry-server" probeResult="failure" output=< Jan 29 11:55:08 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:55:08 crc kubenswrapper[4593]: > Jan 29 11:55:13 crc kubenswrapper[4593]: I0129 11:55:13.076678 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:13 crc kubenswrapper[4593]: E0129 11:55:13.077568 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:17 crc kubenswrapper[4593]: I0129 11:55:17.692667 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:17 crc kubenswrapper[4593]: I0129 11:55:17.749684 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:17 crc kubenswrapper[4593]: I0129 11:55:17.946107 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:55:19 crc kubenswrapper[4593]: I0129 11:55:18.999717 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" containerID="cri-o://4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464" gracePeriod=2 Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.017471 4593 generic.go:334] "Generic (PLEG): container finished" podID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerID="4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464" exitCode=0 Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.017763 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464"} Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.334288 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.491104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.491221 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.491269 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.492374 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities" (OuterVolumeSpecName: "utilities") pod "e3ea983b-a914-4260-9fe2-8fa75d2f1e08" (UID: "e3ea983b-a914-4260-9fe2-8fa75d2f1e08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.531947 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk" (OuterVolumeSpecName: "kube-api-access-c9qsk") pod "e3ea983b-a914-4260-9fe2-8fa75d2f1e08" (UID: "e3ea983b-a914-4260-9fe2-8fa75d2f1e08"). InnerVolumeSpecName "kube-api-access-c9qsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.594530 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.594566 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") on node \"crc\" DevicePath \"\"" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.635581 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3ea983b-a914-4260-9fe2-8fa75d2f1e08" (UID: "e3ea983b-a914-4260-9fe2-8fa75d2f1e08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.699179 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.028833 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"a8f692cc178e40d6dd2a183f0f930fe61616b4622888a4583e31fe0b88efede4"} Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.028896 4593 scope.go:117] "RemoveContainer" containerID="4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.030091 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.054251 4593 scope.go:117] "RemoveContainer" containerID="0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.073779 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.103810 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.119272 4593 scope.go:117] "RemoveContainer" containerID="bc929a23cf8d1038032aac760cbbd186410de536e009c9bb9f788f8fc8527d9a" Jan 29 11:55:23 crc kubenswrapper[4593]: I0129 11:55:23.086379 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" path="/var/lib/kubelet/pods/e3ea983b-a914-4260-9fe2-8fa75d2f1e08/volumes" Jan 29 11:55:25 crc kubenswrapper[4593]: I0129 11:55:25.082895 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:25 crc kubenswrapper[4593]: E0129 11:55:25.083479 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:36 crc kubenswrapper[4593]: I0129 11:55:36.075002 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:36 crc kubenswrapper[4593]: E0129 11:55:36.075929 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:50 crc kubenswrapper[4593]: I0129 11:55:50.075532 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:50 crc kubenswrapper[4593]: E0129 11:55:50.076459 
4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:56:05 crc kubenswrapper[4593]: I0129 11:56:05.082389 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:56:05 crc kubenswrapper[4593]: I0129 11:56:05.425606 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1"} Jan 29 11:58:33 crc kubenswrapper[4593]: I0129 11:58:33.947177 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:58:33 crc kubenswrapper[4593]: I0129 11:58:33.948009 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:03 crc kubenswrapper[4593]: I0129 11:59:03.946657 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:59:03 crc kubenswrapper[4593]: I0129 11:59:03.947346 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.946318 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.947131 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.947234 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.948159 4593 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.948235 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1" gracePeriod=600 Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.652832 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1" exitCode=0 Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.652900 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1"} Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.653452 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0"} Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.653520 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.200906 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9"] Jan 29 12:00:00 crc kubenswrapper[4593]: E0129 12:00:00.202067 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-utilities" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202100 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-utilities" Jan 29 12:00:00 crc kubenswrapper[4593]: E0129 12:00:00.202133 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202142 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" Jan 29 12:00:00 crc kubenswrapper[4593]: E0129 12:00:00.202160 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-content" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202168 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-content" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202398 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.203228 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.206228 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.206714 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.246733 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9"] Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.333147 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.333242 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.333286 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.434995 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.435061 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.435091 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.436596 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod 
\"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.475786 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.482172 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.543346 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.220431 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9"] Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.911083 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerStarted","Data":"c1728aeb51c3b8fb22eb3ef7139e5d2760bf904fa43fbe1defddfdb72c433cb4"} Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.911127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerStarted","Data":"b9b1d235b3bafaa96859a822c6375bf05a330d7acc37ead49553ec9eb4fafcd4"} Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.935336 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" podStartSLOduration=1.935301767 podStartE2EDuration="1.935301767s" podCreationTimestamp="2026-01-29 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:00:01.927328811 +0000 UTC m=+3667.800363002" watchObservedRunningTime="2026-01-29 12:00:01.935301767 +0000 UTC m=+3667.808335958" Jan 29 12:00:02 crc kubenswrapper[4593]: I0129 12:00:02.921415 4593 generic.go:334] "Generic (PLEG): container finished" podID="88bca612-672a-4f26-8d39-7fde2a190cca" containerID="c1728aeb51c3b8fb22eb3ef7139e5d2760bf904fa43fbe1defddfdb72c433cb4" exitCode=0 Jan 29 12:00:02 crc kubenswrapper[4593]: I0129 12:00:02.921473 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerDied","Data":"c1728aeb51c3b8fb22eb3ef7139e5d2760bf904fa43fbe1defddfdb72c433cb4"} Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.527163 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.633332 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod \"88bca612-672a-4f26-8d39-7fde2a190cca\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.633479 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"88bca612-672a-4f26-8d39-7fde2a190cca\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.633515 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"88bca612-672a-4f26-8d39-7fde2a190cca\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.634378 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume" (OuterVolumeSpecName: "config-volume") pod "88bca612-672a-4f26-8d39-7fde2a190cca" (UID: "88bca612-672a-4f26-8d39-7fde2a190cca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.642251 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "88bca612-672a-4f26-8d39-7fde2a190cca" (UID: "88bca612-672a-4f26-8d39-7fde2a190cca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.642946 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6" (OuterVolumeSpecName: "kube-api-access-nhpl6") pod "88bca612-672a-4f26-8d39-7fde2a190cca" (UID: "88bca612-672a-4f26-8d39-7fde2a190cca"). InnerVolumeSpecName "kube-api-access-nhpl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.736473 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.736874 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.736919 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.945367 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerDied","Data":"b9b1d235b3bafaa96859a822c6375bf05a330d7acc37ead49553ec9eb4fafcd4"} Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.945455 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b1d235b3bafaa96859a822c6375bf05a330d7acc37ead49553ec9eb4fafcd4" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.945548 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:05 crc kubenswrapper[4593]: I0129 12:00:05.622250 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 12:00:05 crc kubenswrapper[4593]: I0129 12:00:05.632459 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 12:00:07 crc kubenswrapper[4593]: I0129 12:00:07.087055 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" path="/var/lib/kubelet/pods/8d624d92-85b0-48dc-94f4-047ac84aaa0c/volumes" Jan 29 12:00:16 crc kubenswrapper[4593]: I0129 12:00:16.406622 4593 scope.go:117] "RemoveContainer" containerID="c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.003767 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:00:52 crc kubenswrapper[4593]: E0129 12:00:52.004889 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bca612-672a-4f26-8d39-7fde2a190cca" containerName="collect-profiles" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.004908 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bca612-672a-4f26-8d39-7fde2a190cca" containerName="collect-profiles" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.005167 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bca612-672a-4f26-8d39-7fde2a190cca" containerName="collect-profiles" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.006912 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.015119 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.151652 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.152354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.152571 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.254606 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.254698 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.254778 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.255346 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.255418 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.277151 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.355775 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.940984 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.409509 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" exitCode=0 Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.409835 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae"} Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.410063 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerStarted","Data":"57a10fdf5b721a0b423550e25c12e2cc02e30dd94c94225a8018e4ccd80601d0"} Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.416719 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:00:56 crc kubenswrapper[4593]: I0129 12:00:56.440973 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerStarted","Data":"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1"} Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.184689 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494801-8jgxn"] Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.186236 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.197241 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494801-8jgxn"] Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198202 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198272 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198397 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198450 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.317976 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.318086 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.318249 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.318336 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.331989 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.333575 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.350814 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.353908 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.506312 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:01 crc kubenswrapper[4593]: I0129 12:01:01.124799 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494801-8jgxn"] Jan 29 12:01:01 crc kubenswrapper[4593]: I0129 12:01:01.555076 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerStarted","Data":"d5dcebdff1872143a7baa5b2f3daf0b82ebdcad3fdc1e3124fd8cbb11c7b3339"} Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.567182 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerStarted","Data":"c4f23aad4e75d53e9867238c0a4577c6262c2408292cb4cc450a9a2b02c73f78"} Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.572171 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" exitCode=0 Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.572253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1"} Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.589901 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494801-8jgxn" podStartSLOduration=2.589876692 podStartE2EDuration="2.589876692s" podCreationTimestamp="2026-01-29 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:01:02.588286489 +0000 UTC m=+3728.461320680" watchObservedRunningTime="2026-01-29 12:01:02.589876692 +0000 UTC m=+3728.462910893" Jan 29 12:01:03 crc 
kubenswrapper[4593]: I0129 12:01:03.588394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerStarted","Data":"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3"} Jan 29 12:01:03 crc kubenswrapper[4593]: I0129 12:01:03.625837 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmvmp" podStartSLOduration=2.919507389 podStartE2EDuration="12.625812366s" podCreationTimestamp="2026-01-29 12:00:51 +0000 UTC" firstStartedPulling="2026-01-29 12:00:53.416302106 +0000 UTC m=+3719.289336297" lastFinishedPulling="2026-01-29 12:01:03.122607073 +0000 UTC m=+3728.995641274" observedRunningTime="2026-01-29 12:01:03.623963736 +0000 UTC m=+3729.496997957" watchObservedRunningTime="2026-01-29 12:01:03.625812366 +0000 UTC m=+3729.498846557" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.587916 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.590540 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.612074 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.665499 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.665894 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.666096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.768081 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.768651 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc 
kubenswrapper[4593]: I0129 12:01:04.768740 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.768738 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.769214 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.808947 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.916132 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:05 crc kubenswrapper[4593]: I0129 12:01:05.612887 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:06 crc kubenswrapper[4593]: I0129 12:01:06.628568 4593 generic.go:334] "Generic (PLEG): container finished" podID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" exitCode=0 Jan 29 12:01:06 crc kubenswrapper[4593]: I0129 12:01:06.628687 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b"} Jan 29 12:01:06 crc kubenswrapper[4593]: I0129 12:01:06.629108 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerStarted","Data":"21eb6256f05b21a81f3c529ef18a59cfba08db30ea0577b58f3b450ba62f0f3f"} Jan 29 12:01:08 crc kubenswrapper[4593]: I0129 12:01:08.649554 4593 generic.go:334] "Generic (PLEG): container finished" podID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerID="c4f23aad4e75d53e9867238c0a4577c6262c2408292cb4cc450a9a2b02c73f78" exitCode=0 Jan 29 12:01:08 crc kubenswrapper[4593]: I0129 12:01:08.649671 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerDied","Data":"c4f23aad4e75d53e9867238c0a4577c6262c2408292cb4cc450a9a2b02c73f78"} Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.204824 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389240 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389312 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389457 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389544 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.675985 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.677172 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerDied","Data":"d5dcebdff1872143a7baa5b2f3daf0b82ebdcad3fdc1e3124fd8cbb11c7b3339"} Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.677246 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5dcebdff1872143a7baa5b2f3daf0b82ebdcad3fdc1e3124fd8cbb11c7b3339" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.867594 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24" (OuterVolumeSpecName: "kube-api-access-cxj24") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "kube-api-access-cxj24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.876544 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.900388 4593 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.900422 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.958785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.959045 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data" (OuterVolumeSpecName: "config-data") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:01:11 crc kubenswrapper[4593]: I0129 12:01:11.002566 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:11 crc kubenswrapper[4593]: I0129 12:01:11.002603 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:11 crc kubenswrapper[4593]: I0129 12:01:11.687294 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerStarted","Data":"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e"} Jan 29 12:01:12 crc kubenswrapper[4593]: I0129 12:01:12.356726 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:12 crc kubenswrapper[4593]: I0129 12:01:12.356780 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:12 crc kubenswrapper[4593]: I0129 12:01:12.411158 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:13 crc kubenswrapper[4593]: I0129 12:01:13.010572 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:13 crc kubenswrapper[4593]: I0129 12:01:13.071510 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:01:13 crc kubenswrapper[4593]: I0129 12:01:13.723352 4593 generic.go:334] "Generic (PLEG): container finished" podID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" exitCode=0 Jan 29 12:01:13 crc 
kubenswrapper[4593]: I0129 12:01:13.723380 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e"} Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.735517 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerStarted","Data":"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f"} Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.735682 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmvmp" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" containerID="cri-o://f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" gracePeriod=2 Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.917003 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.917067 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.447353 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.470025 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.470119 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.470179 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.471072 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities" (OuterVolumeSpecName: "utilities") pod "fd4958b5-6b8b-4701-854c-5fffd4db0e4c" (UID: "fd4958b5-6b8b-4701-854c-5fffd4db0e4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.476892 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r" (OuterVolumeSpecName: "kube-api-access-qd27r") pod "fd4958b5-6b8b-4701-854c-5fffd4db0e4c" (UID: "fd4958b5-6b8b-4701-854c-5fffd4db0e4c"). InnerVolumeSpecName "kube-api-access-qd27r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.484913 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69vh6" podStartSLOduration=3.9887837360000002 podStartE2EDuration="11.484888011s" podCreationTimestamp="2026-01-29 12:01:04 +0000 UTC" firstStartedPulling="2026-01-29 12:01:06.63067129 +0000 UTC m=+3732.503705481" lastFinishedPulling="2026-01-29 12:01:14.126775565 +0000 UTC m=+3739.999809756" observedRunningTime="2026-01-29 12:01:14.771817146 +0000 UTC m=+3740.644851337" watchObservedRunningTime="2026-01-29 12:01:15.484888011 +0000 UTC m=+3741.357922222" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.532318 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd4958b5-6b8b-4701-854c-5fffd4db0e4c" (UID: "fd4958b5-6b8b-4701-854c-5fffd4db0e4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.574019 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.574060 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.574071 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.760484 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" exitCode=0 Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.771003 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3"} Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.771233 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"57a10fdf5b721a0b423550e25c12e2cc02e30dd94c94225a8018e4ccd80601d0"} Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.771300 4593 scope.go:117] "RemoveContainer" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.892063 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.951290 4593 scope.go:117] "RemoveContainer" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.980248 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-69vh6" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" probeResult="failure" output=< Jan 29 12:01:15 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:01:15 crc kubenswrapper[4593]: > Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.987708 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.995863 4593 scope.go:117] "RemoveContainer" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.008518 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.043760 4593 scope.go:117] "RemoveContainer" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" Jan 29 12:01:16 crc kubenswrapper[4593]: E0129 12:01:16.044552 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3\": container with ID starting with f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3 not found: ID does not exist" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.044600 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3"} err="failed to get container status \"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3\": rpc error: code = NotFound desc = could not find container \"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3\": container with ID starting with f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3 not found: ID does not exist" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.044642 4593 scope.go:117] "RemoveContainer" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" Jan 29 12:01:16 crc kubenswrapper[4593]: E0129 12:01:16.047077 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1\": container with ID starting with b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1 not found: ID does not exist" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.047151 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1"} err="failed to get container status \"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1\": rpc error: code = NotFound desc = could not find container \"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1\": 
container with ID starting with b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1 not found: ID does not exist" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.047187 4593 scope.go:117] "RemoveContainer" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" Jan 29 12:01:16 crc kubenswrapper[4593]: E0129 12:01:16.050836 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae\": container with ID starting with ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae not found: ID does not exist" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.051076 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae"} err="failed to get container status \"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae\": rpc error: code = NotFound desc = could not find container \"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae\": container with ID starting with ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae not found: ID does not exist" Jan 29 12:01:17 crc kubenswrapper[4593]: I0129 12:01:17.090195 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" path="/var/lib/kubelet/pods/fd4958b5-6b8b-4701-854c-5fffd4db0e4c/volumes" Jan 29 12:01:24 crc kubenswrapper[4593]: I0129 12:01:24.969477 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:25 crc kubenswrapper[4593]: I0129 12:01:25.028246 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:25 crc kubenswrapper[4593]: I0129 12:01:25.208250 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:26 crc kubenswrapper[4593]: I0129 12:01:26.901410 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69vh6" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" containerID="cri-o://bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" gracePeriod=2 Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.484824 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.651114 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.651212 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.651462 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.652648 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities" (OuterVolumeSpecName: "utilities") pod "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" (UID: "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.673654 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.692850 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv" (OuterVolumeSpecName: "kube-api-access-z8pmv") pod "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" (UID: "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a"). InnerVolumeSpecName "kube-api-access-z8pmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.746345 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" (UID: "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.746896 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747341 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747359 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747380 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747390 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747404 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerName="keystone-cron" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747409 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerName="keystone-cron" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747417 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747423 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747435 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747441 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747463 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747470 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747482 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747488 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747700 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747717 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747729 4593 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerName="keystone-cron" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.749779 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.758969 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777321 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777435 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777515 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777658 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777682 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879357 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879443 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.880165 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.898426 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912260 4593 generic.go:334] "Generic (PLEG): container finished" podID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" exitCode=0 Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912315 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f"} Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912337 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912355 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"21eb6256f05b21a81f3c529ef18a59cfba08db30ea0577b58f3b450ba62f0f3f"} Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912380 4593 scope.go:117] "RemoveContainer" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.965537 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.967301 4593 scope.go:117] "RemoveContainer" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.983067 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.989712 4593 scope.go:117] "RemoveContainer" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.010487 4593 scope.go:117] "RemoveContainer" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" Jan 29 12:01:28 crc kubenswrapper[4593]: E0129 12:01:28.011115 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f\": container with ID starting with bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f not found: ID 
does not exist" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011349 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f"} err="failed to get container status \"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f\": rpc error: code = NotFound desc = could not find container \"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f\": container with ID starting with bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f not found: ID does not exist" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011375 4593 scope.go:117] "RemoveContainer" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" Jan 29 12:01:28 crc kubenswrapper[4593]: E0129 12:01:28.011893 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e\": container with ID starting with e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e not found: ID does not exist" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011916 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e"} err="failed to get container status \"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e\": rpc error: code = NotFound desc = could not find container \"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e\": container with ID starting with e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e not found: ID does not exist" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011933 4593 scope.go:117] "RemoveContainer" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" Jan 29 12:01:28 crc kubenswrapper[4593]: E0129 12:01:28.012184 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b\": container with ID starting with dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b not found: ID does not exist" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.012216 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b"} err="failed to get container status \"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b\": rpc error: code = NotFound desc = could not find container \"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b\": container with ID starting with dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b not found: ID does not exist" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.085364 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.613523 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.927324 4593 generic.go:334] "Generic (PLEG): container finished" podID="179a9993-2883-4f19-9c6e-694735342028" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" exitCode=0 Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.927399 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590"} Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.927429 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerStarted","Data":"d4efa2dcc8fad3c1791de98ad732751b7ce7b129092b4c0370f8969d147c47ee"} Jan 29 12:01:29 crc kubenswrapper[4593]: I0129 12:01:29.087185 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" path="/var/lib/kubelet/pods/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a/volumes" Jan 29 12:01:30 crc kubenswrapper[4593]: I0129 12:01:30.955420 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerStarted","Data":"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44"} Jan 29 12:01:31 crc kubenswrapper[4593]: I0129 12:01:31.968140 4593 generic.go:334] "Generic (PLEG): container finished" podID="179a9993-2883-4f19-9c6e-694735342028" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" exitCode=0 Jan 29 12:01:31 crc kubenswrapper[4593]: I0129 12:01:31.968220 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44"} Jan 29 12:01:32 crc kubenswrapper[4593]: I0129 12:01:32.982536 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerStarted","Data":"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625"} Jan 29 12:01:33 crc kubenswrapper[4593]: I0129 12:01:33.011695 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-smnz5" podStartSLOduration=2.443026248 podStartE2EDuration="6.011649474s" podCreationTimestamp="2026-01-29 12:01:27 +0000 UTC" firstStartedPulling="2026-01-29 12:01:28.930648837 +0000 UTC m=+3754.803683028" lastFinishedPulling="2026-01-29 12:01:32.499272063 +0000 UTC m=+3758.372306254" observedRunningTime="2026-01-29 12:01:33.0007874 +0000 UTC m=+3758.873821591" watchObservedRunningTime="2026-01-29 12:01:33.011649474 +0000 UTC m=+3758.884683675" Jan 29 12:01:38 crc kubenswrapper[4593]: I0129 12:01:38.086532 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:38 crc kubenswrapper[4593]: I0129 12:01:38.087395 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:38 crc kubenswrapper[4593]: I0129 12:01:38.251038 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:39 crc kubenswrapper[4593]: I0129 12:01:39.090553 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:39 crc kubenswrapper[4593]: I0129 12:01:39.144304 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.057813 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-smnz5" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" containerID="cri-o://c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" gracePeriod=2 Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.751278 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.858021 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"179a9993-2883-4f19-9c6e-694735342028\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.858275 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"179a9993-2883-4f19-9c6e-694735342028\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.858482 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"179a9993-2883-4f19-9c6e-694735342028\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.859868 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities" (OuterVolumeSpecName: "utilities") pod "179a9993-2883-4f19-9c6e-694735342028" (UID: "179a9993-2883-4f19-9c6e-694735342028"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.887087 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "179a9993-2883-4f19-9c6e-694735342028" (UID: "179a9993-2883-4f19-9c6e-694735342028"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.887475 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6" (OuterVolumeSpecName: "kube-api-access-jb5b6") pod "179a9993-2883-4f19-9c6e-694735342028" (UID: "179a9993-2883-4f19-9c6e-694735342028"). InnerVolumeSpecName "kube-api-access-jb5b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.960754 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.961070 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.961082 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069434 4593 generic.go:334] "Generic (PLEG): container finished" podID="179a9993-2883-4f19-9c6e-694735342028" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" exitCode=0 Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069481 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625"} Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069516 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"d4efa2dcc8fad3c1791de98ad732751b7ce7b129092b4c0370f8969d147c47ee"} Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069553 4593 scope.go:117] "RemoveContainer" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.093306 4593 scope.go:117] "RemoveContainer" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.104780 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.124598 4593 scope.go:117] "RemoveContainer" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.145990 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.176835 4593 scope.go:117] "RemoveContainer" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" Jan 29 12:01:42 crc kubenswrapper[4593]: E0129 12:01:42.177449 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625\": container with ID starting with c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625 not found: ID does not exist" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.177504 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625"} err="failed to get container status \"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625\": rpc error: code = NotFound desc = could not find container \"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625\": container with ID starting with c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625 not found: ID does not exist" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.177532 4593 scope.go:117] "RemoveContainer" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" Jan 29 12:01:42 crc kubenswrapper[4593]: E0129 12:01:42.178010 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44\": container with ID starting with e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44 not found: ID does not exist" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.178043 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44"} err="failed to get container status \"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44\": rpc error: code = NotFound desc = could not find container \"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44\": container with ID starting with e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44 not found: ID does not exist" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.178058 4593 scope.go:117] "RemoveContainer" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" Jan 29 12:01:42 crc kubenswrapper[4593]: E0129 12:01:42.179540 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590\": container with ID starting with 2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590 not found: ID does not exist" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.179580 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590"} err="failed to get container status \"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590\": rpc error: code = NotFound desc = could not find container \"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590\": container with ID starting with 2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590 not found: ID does not exist" Jan 29 12:01:43 crc kubenswrapper[4593]: I0129 12:01:43.087966 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179a9993-2883-4f19-9c6e-694735342028" path="/var/lib/kubelet/pods/179a9993-2883-4f19-9c6e-694735342028/volumes" Jan 29 12:02:03 crc kubenswrapper[4593]: I0129 12:02:03.946273 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:02:03 crc kubenswrapper[4593]: I0129 12:02:03.946848 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:02:33 crc kubenswrapper[4593]: I0129 12:02:33.946733 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:02:33 crc kubenswrapper[4593]: I0129 12:02:33.947235 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.946757 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.947332 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.947390 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.948265 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.948332 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" gracePeriod=600 Jan 29 12:03:04 crc kubenswrapper[4593]: E0129 12:03:04.078038 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.943438 4593 
generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" exitCode=0 Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.943493 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0"} Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.943556 4593 scope.go:117] "RemoveContainer" containerID="e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1" Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.944418 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:04 crc kubenswrapper[4593]: E0129 12:03:04.944839 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:20 crc kubenswrapper[4593]: I0129 12:03:20.075190 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:20 crc kubenswrapper[4593]: E0129 12:03:20.076064 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:34 crc kubenswrapper[4593]: I0129 12:03:34.075486 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:34 crc kubenswrapper[4593]: E0129 12:03:34.076426 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:46 crc kubenswrapper[4593]: I0129 12:03:46.076141 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:46 crc kubenswrapper[4593]: E0129 12:03:46.078131 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:00 crc kubenswrapper[4593]: I0129 12:04:00.075309 4593 scope.go:117] "RemoveContainer" 
containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:00 crc kubenswrapper[4593]: E0129 12:04:00.077137 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:13 crc kubenswrapper[4593]: I0129 12:04:13.074977 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:13 crc kubenswrapper[4593]: E0129 12:04:13.078215 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:25 crc kubenswrapper[4593]: I0129 12:04:25.081964 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:25 crc kubenswrapper[4593]: E0129 12:04:25.082759 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:36 crc kubenswrapper[4593]: I0129 12:04:36.074956 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:36 crc kubenswrapper[4593]: E0129 12:04:36.075752 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:47 crc kubenswrapper[4593]: I0129 12:04:47.075188 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:47 crc kubenswrapper[4593]: E0129 12:04:47.075968 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:59 crc kubenswrapper[4593]: I0129 12:04:59.076837 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:59 crc kubenswrapper[4593]: E0129 12:04:59.077598 4593 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.244591 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:06 crc kubenswrapper[4593]: E0129 12:05:06.245615 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245664 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" Jan 29 12:05:06 crc kubenswrapper[4593]: E0129 12:05:06.245687 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-utilities" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245694 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-utilities" Jan 29 12:05:06 crc kubenswrapper[4593]: E0129 12:05:06.245709 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-content" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245716 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-content" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245960 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.247395 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.278599 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.305868 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.306027 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.306122 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408223 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408589 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408667 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408836 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.409326 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.437078 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.582620 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:07 crc kubenswrapper[4593]: I0129 12:05:07.310177 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:08 crc kubenswrapper[4593]: I0129 12:05:08.044248 4593 generic.go:334] "Generic (PLEG): container finished" podID="67146159-618b-4376-89e9-4c4433776a79" containerID="8d22093bb0433d57ba4af0c4dc12d757c6b02132977c80845c4c07f793d8a283" exitCode=0 Jan 29 12:05:08 crc kubenswrapper[4593]: I0129 12:05:08.044333 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"8d22093bb0433d57ba4af0c4dc12d757c6b02132977c80845c4c07f793d8a283"} Jan 29 12:05:08 crc kubenswrapper[4593]: I0129 12:05:08.045169 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerStarted","Data":"423b79897654c7bfeba89f8b2ffde23e4d2402031fa3c58273297441a72736dd"} Jan 29 12:05:09 crc kubenswrapper[4593]: I0129 12:05:09.060843 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerStarted","Data":"903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd"} Jan 29 12:05:14 crc kubenswrapper[4593]: I0129 12:05:14.075244 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:14 crc kubenswrapper[4593]: E0129 12:05:14.076272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:17 crc kubenswrapper[4593]: I0129 12:05:17.271341 4593 generic.go:334] "Generic (PLEG): container finished" podID="67146159-618b-4376-89e9-4c4433776a79" containerID="903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd" exitCode=0 Jan 29 12:05:17 crc kubenswrapper[4593]: I0129 12:05:17.271672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd"} Jan 29 12:05:18 crc kubenswrapper[4593]: I0129 12:05:18.281316 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerStarted","Data":"fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52"} Jan 29 12:05:26 crc kubenswrapper[4593]: I0129 12:05:26.583732 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:26 crc kubenswrapper[4593]: I0129 12:05:26.584314 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:27 crc kubenswrapper[4593]: I0129 12:05:27.632567 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" probeResult="failure" output=< Jan 29 12:05:27 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:05:27 crc kubenswrapper[4593]: > Jan 29 12:05:29 crc kubenswrapper[4593]: I0129 12:05:29.074993 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:29 crc kubenswrapper[4593]: E0129 12:05:29.075540 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:37 crc kubenswrapper[4593]: I0129 12:05:37.634156 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" probeResult="failure" output=< Jan 29 12:05:37 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:05:37 crc kubenswrapper[4593]: > Jan 29 12:05:44 crc kubenswrapper[4593]: I0129 12:05:44.075784 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:44 crc kubenswrapper[4593]: E0129 12:05:44.076673 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:47 crc kubenswrapper[4593]: I0129 12:05:47.630193 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" probeResult="failure" output=< Jan 29 12:05:47 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:05:47 crc kubenswrapper[4593]: > Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.647210 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.672443 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k7vkk" podStartSLOduration=41.003838628 podStartE2EDuration="50.672405524s" podCreationTimestamp="2026-01-29 12:05:06 +0000 UTC" firstStartedPulling="2026-01-29 12:05:08.048100037 +0000 UTC m=+3973.921134228" lastFinishedPulling="2026-01-29 12:05:17.716666933 +0000 UTC m=+3983.589701124" observedRunningTime="2026-01-29 12:05:18.304620909 +0000 UTC 
m=+3984.177655110" watchObservedRunningTime="2026-01-29 12:05:56.672405524 +0000 UTC m=+4022.545439715" Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.698913 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.897384 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:57 crc kubenswrapper[4593]: I0129 12:05:57.074890 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:57 crc kubenswrapper[4593]: E0129 12:05:57.075218 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:58 crc kubenswrapper[4593]: I0129 12:05:58.083440 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" containerID="cri-o://fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52" gracePeriod=2 Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.117840 4593 generic.go:334] "Generic (PLEG): container finished" podID="67146159-618b-4376-89e9-4c4433776a79" containerID="fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52" exitCode=0 Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.118064 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52"} Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.199607 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.322913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"67146159-618b-4376-89e9-4c4433776a79\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.323085 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"67146159-618b-4376-89e9-4c4433776a79\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.323299 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"67146159-618b-4376-89e9-4c4433776a79\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.324924 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities" (OuterVolumeSpecName: "utilities") pod "67146159-618b-4376-89e9-4c4433776a79" (UID: "67146159-618b-4376-89e9-4c4433776a79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.332024 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5" (OuterVolumeSpecName: "kube-api-access-shcx5") pod "67146159-618b-4376-89e9-4c4433776a79" (UID: "67146159-618b-4376-89e9-4c4433776a79"). InnerVolumeSpecName "kube-api-access-shcx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.426324 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") on node \"crc\" DevicePath \"\"" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.426365 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.451947 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67146159-618b-4376-89e9-4c4433776a79" (UID: "67146159-618b-4376-89e9-4c4433776a79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.528495 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.132735 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"423b79897654c7bfeba89f8b2ffde23e4d2402031fa3c58273297441a72736dd"} Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.132887 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.133092 4593 scope.go:117] "RemoveContainer" containerID="fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.161347 4593 scope.go:117] "RemoveContainer" containerID="903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.171772 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.186325 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.188455 4593 scope.go:117] "RemoveContainer" containerID="8d22093bb0433d57ba4af0c4dc12d757c6b02132977c80845c4c07f793d8a283" Jan 29 12:06:01 crc kubenswrapper[4593]: I0129 12:06:01.090723 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67146159-618b-4376-89e9-4c4433776a79" path="/var/lib/kubelet/pods/67146159-618b-4376-89e9-4c4433776a79/volumes" Jan 29 12:06:12 crc kubenswrapper[4593]: I0129 12:06:12.075010 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:12 crc kubenswrapper[4593]: E0129 12:06:12.075962 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:06:27 crc kubenswrapper[4593]: I0129 12:06:27.075369 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:27 crc kubenswrapper[4593]: E0129 12:06:27.076733 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:06:39 crc kubenswrapper[4593]: I0129 12:06:39.075407 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:39 crc kubenswrapper[4593]: E0129 12:06:39.076227 
4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:06:51 crc kubenswrapper[4593]: I0129 12:06:51.091523 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:51 crc kubenswrapper[4593]: E0129 12:06:51.092913 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:05 crc kubenswrapper[4593]: I0129 12:07:05.110675 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:05 crc kubenswrapper[4593]: E0129 12:07:05.112053 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:16 crc kubenswrapper[4593]: I0129 12:07:16.074578 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:16 crc kubenswrapper[4593]: E0129 12:07:16.075339 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:28 crc kubenswrapper[4593]: I0129 12:07:28.075101 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:28 crc kubenswrapper[4593]: E0129 12:07:28.075853 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:41 crc kubenswrapper[4593]: I0129 12:07:41.075718 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:41 crc kubenswrapper[4593]: E0129 12:07:41.076560 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:52 crc kubenswrapper[4593]: I0129 12:07:52.077221 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:52 crc kubenswrapper[4593]: E0129 12:07:52.078088 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:08:06 crc kubenswrapper[4593]: I0129 12:08:06.075985 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:08:06 crc kubenswrapper[4593]: I0129 12:08:06.715388 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"} Jan 29 12:10:33 crc kubenswrapper[4593]: I0129 12:10:33.946151 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:10:33 crc kubenswrapper[4593]: I0129 12:10:33.946869 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:03 crc kubenswrapper[4593]: I0129 12:11:03.945883 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:03 crc kubenswrapper[4593]: I0129 12:11:03.946455 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.521192 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:19 crc kubenswrapper[4593]: E0129 12:11:19.522315 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-content" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522349 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-content" Jan 29 12:11:19 crc kubenswrapper[4593]: E0129 
12:11:19.522377 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522388 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" Jan 29 12:11:19 crc kubenswrapper[4593]: E0129 12:11:19.522409 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-utilities" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522418 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-utilities" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522692 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.524170 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.545001 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.693231 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.693394 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.693456 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.795679 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.795770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.795860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.796318 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.796625 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.825868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.845854 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.455681 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.916012 4593 generic.go:334] "Generic (PLEG): container finished" podID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerID="e974cfd4ba99c10cc2aad6fe3294ee279ef945d78da77b5575efff84d75dc3f5" exitCode=0 Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.916204 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"e974cfd4ba99c10cc2aad6fe3294ee279ef945d78da77b5575efff84d75dc3f5"} Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.916338 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerStarted","Data":"0f8b5557b97ae87240ce95f6ce1826bf3eddc35e903219d0aa779451e8a2b146"} Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.919398 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:11:22 crc kubenswrapper[4593]: I0129 12:11:22.950691 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerStarted","Data":"9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536"} Jan 29 12:11:26 crc kubenswrapper[4593]: I0129 12:11:26.987416 4593 generic.go:334] "Generic (PLEG): container finished" podID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerID="9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536" exitCode=0 Jan 29 12:11:26 crc kubenswrapper[4593]: I0129 12:11:26.987487 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536"} Jan 29 12:11:28 crc kubenswrapper[4593]: I0129 12:11:28.001236 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerStarted","Data":"d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a"} Jan 29 12:11:28 crc kubenswrapper[4593]: I0129 12:11:28.023311 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4zmb4" podStartSLOduration=2.553056247 podStartE2EDuration="9.02325509s" podCreationTimestamp="2026-01-29 12:11:19 +0000 UTC" firstStartedPulling="2026-01-29 12:11:20.919063241 +0000 UTC m=+4346.792097432" lastFinishedPulling="2026-01-29 12:11:27.389262084 +0000 UTC m=+4353.262296275" observedRunningTime="2026-01-29 12:11:28.019881249 +0000 UTC m=+4353.892915450" watchObservedRunningTime="2026-01-29 12:11:28.02325509 +0000 UTC m=+4353.896289311" Jan 29 12:11:29 crc kubenswrapper[4593]: I0129 12:11:29.847283 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:29 crc kubenswrapper[4593]: I0129 12:11:29.847683 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:29 crc kubenswrapper[4593]: I0129 12:11:29.900212 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.947045 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.947532 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.947582 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.949874 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.949959 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89" gracePeriod=600 Jan 29 12:11:35 crc kubenswrapper[4593]: 
I0129 12:11:35.063857 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89" exitCode=0 Jan 29 12:11:35 crc kubenswrapper[4593]: I0129 12:11:35.064043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"} Jan 29 12:11:35 crc kubenswrapper[4593]: I0129 12:11:35.065362 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"} Jan 29 12:11:35 crc kubenswrapper[4593]: I0129 12:11:35.065462 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:11:39 crc kubenswrapper[4593]: I0129 12:11:39.898311 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:39 crc kubenswrapper[4593]: I0129 12:11:39.972107 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:40 crc kubenswrapper[4593]: I0129 12:11:40.121365 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4zmb4" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" containerID="cri-o://d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a" gracePeriod=2 Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137281 4593 generic.go:334] "Generic (PLEG): container finished" podID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerID="d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a" exitCode=0 Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137379 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a"} Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137649 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"0f8b5557b97ae87240ce95f6ce1826bf3eddc35e903219d0aa779451e8a2b146"} Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137700 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8b5557b97ae87240ce95f6ce1826bf3eddc35e903219d0aa779451e8a2b146" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.189551 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.339042 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.339186 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.339252 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.341556 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities" (OuterVolumeSpecName: "utilities") pod "61af0d72-8d15-4bf9-90f3-514d5a35adeb" (UID: "61af0d72-8d15-4bf9-90f3-514d5a35adeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.391262 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61af0d72-8d15-4bf9-90f3-514d5a35adeb" (UID: "61af0d72-8d15-4bf9-90f3-514d5a35adeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.397605 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7" (OuterVolumeSpecName: "kube-api-access-rcwj7") pod "61af0d72-8d15-4bf9-90f3-514d5a35adeb" (UID: "61af0d72-8d15-4bf9-90f3-514d5a35adeb"). InnerVolumeSpecName "kube-api-access-rcwj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.442116 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.442581 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.442687 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:42 crc kubenswrapper[4593]: I0129 12:11:42.147400 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:42 crc kubenswrapper[4593]: I0129 12:11:42.204270 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:42 crc kubenswrapper[4593]: I0129 12:11:42.210020 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:43 crc kubenswrapper[4593]: I0129 12:11:43.087394 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" path="/var/lib/kubelet/pods/61af0d72-8d15-4bf9-90f3-514d5a35adeb/volumes" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.362557 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:12:53 crc kubenswrapper[4593]: E0129 12:12:53.364069 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-content" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364091 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-content" Jan 29 12:12:53 crc kubenswrapper[4593]: E0129 12:12:53.364110 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-utilities" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364122 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-utilities" Jan 29 12:12:53 crc kubenswrapper[4593]: E0129 12:12:53.364159 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364172 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364498 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.370349 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.413291 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.433704 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.434228 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.434464 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537210 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537331 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537802 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.538090 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.562609 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.743291 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.319404 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.805502 4593 generic.go:334] "Generic (PLEG): container finished" podID="8eaac92f-649f-4974-8386-456b6bd43311" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" exitCode=0 Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.809449 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24"} Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.809578 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerStarted","Data":"9ad4c1e630bd2cb149d0ba952ca91f032d6db8c71bb5a35438114e8234485e71"} Jan 29 12:12:55 crc kubenswrapper[4593]: I0129 12:12:55.816936 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerStarted","Data":"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33"} Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.368325 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.370562 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.380298 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.417509 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.417659 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.417774 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520228 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520419 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520459 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520916 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.521124 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.827036 4593 generic.go:334] "Generic 
(PLEG): container finished" podID="8eaac92f-649f-4974-8386-456b6bd43311" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" exitCode=0 Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.827096 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33"} Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.068716 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.312969 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.807473 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.838544 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerStarted","Data":"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2"} Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.843131 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerStarted","Data":"d1eb148f0820d4908158e1d29cd56e7eb7cb9dbbe8b7a6b3f032a7bdbf59b266"} Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.866554 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98gxd" podStartSLOduration=2.290520244 podStartE2EDuration="4.866534985s" podCreationTimestamp="2026-01-29 12:12:53 +0000 UTC" firstStartedPulling="2026-01-29 12:12:54.80897133 +0000 UTC m=+4440.682005521" lastFinishedPulling="2026-01-29 12:12:57.384986071 +0000 UTC m=+4443.258020262" observedRunningTime="2026-01-29 12:12:57.863243046 +0000 UTC m=+4443.736277257" watchObservedRunningTime="2026-01-29 12:12:57.866534985 +0000 UTC m=+4443.739569176" Jan 29 12:12:58 crc kubenswrapper[4593]: I0129 12:12:58.853051 4593 generic.go:334] "Generic (PLEG): container finished" podID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" exitCode=0 Jan 29 12:12:58 crc kubenswrapper[4593]: I0129 12:12:58.853160 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae"} Jan 29 12:12:59 crc kubenswrapper[4593]: I0129 12:12:59.865391 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerStarted","Data":"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d"} Jan 29 12:13:01 crc kubenswrapper[4593]: I0129 12:13:01.882562 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" exitCode=0 Jan 29 12:13:01 crc kubenswrapper[4593]: I0129 12:13:01.882653 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d"} Jan 29 12:13:02 crc kubenswrapper[4593]: I0129 12:13:02.893686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerStarted","Data":"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58"} Jan 29 12:13:02 crc kubenswrapper[4593]: I0129 12:13:02.918268 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvf74" podStartSLOduration=3.340558313 podStartE2EDuration="6.918244525s" podCreationTimestamp="2026-01-29 12:12:56 +0000 UTC" firstStartedPulling="2026-01-29 12:12:58.854856007 +0000 UTC m=+4444.727890208" lastFinishedPulling="2026-01-29 12:13:02.432542219 +0000 UTC m=+4448.305576420" observedRunningTime="2026-01-29 12:13:02.911219495 +0000 UTC m=+4448.784253686" watchObservedRunningTime="2026-01-29 12:13:02.918244525 +0000 UTC m=+4448.791278716" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.744544 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.744588 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.801584 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.965998 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:05 crc kubenswrapper[4593]: I0129 12:13:05.131056 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:13:05 crc kubenswrapper[4593]: I0129 12:13:05.924874 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98gxd" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server" containerID="cri-o://2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" gracePeriod=2 Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.451259 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.522546 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"8eaac92f-649f-4974-8386-456b6bd43311\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.522868 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"8eaac92f-649f-4974-8386-456b6bd43311\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.522944 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"8eaac92f-649f-4974-8386-456b6bd43311\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.524236 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities" (OuterVolumeSpecName: "utilities") pod "8eaac92f-649f-4974-8386-456b6bd43311" (UID: "8eaac92f-649f-4974-8386-456b6bd43311"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.530220 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n" (OuterVolumeSpecName: "kube-api-access-f997n") pod "8eaac92f-649f-4974-8386-456b6bd43311" (UID: "8eaac92f-649f-4974-8386-456b6bd43311"). InnerVolumeSpecName "kube-api-access-f997n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.566739 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eaac92f-649f-4974-8386-456b6bd43311" (UID: "8eaac92f-649f-4974-8386-456b6bd43311"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.625147 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.625179 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.625191 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.937837 4593 generic.go:334] "Generic (PLEG): container finished" podID="8eaac92f-649f-4974-8386-456b6bd43311" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" exitCode=0 Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.937904 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.937921 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2"} Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.938085 4593 scope.go:117] "RemoveContainer" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.938272 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"9ad4c1e630bd2cb149d0ba952ca91f032d6db8c71bb5a35438114e8234485e71"} Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.967990 4593 scope.go:117] "RemoveContainer" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.996329 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.007289 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.018831 4593 scope.go:117] "RemoveContainer" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.057459 4593 scope.go:117] "RemoveContainer" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" Jan 29 12:13:07 crc kubenswrapper[4593]: E0129 12:13:07.058191 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2\": container with ID starting with 2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2 not found: ID does not exist" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058236 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2"} err="failed to get container status \"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2\": rpc error: code = NotFound desc = could not find container \"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2\": container with ID starting with 2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2 not found: ID does not exist" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058263 4593 scope.go:117] "RemoveContainer" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" Jan 29 12:13:07 crc kubenswrapper[4593]: E0129 12:13:07.058591 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33\": container with ID starting with c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33 not found: ID does not exist" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058621 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33"} err="failed to get container status \"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33\": rpc error: code = NotFound desc = could not find container \"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33\": container with ID starting with c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33 not found: ID does not exist" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058654 4593 scope.go:117] "RemoveContainer" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" Jan 29 12:13:07 crc kubenswrapper[4593]: E0129 12:13:07.058969 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24\": container with ID starting with c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24 not found: ID does not exist" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.059141 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24"} err="failed to get container status \"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24\": rpc error: code = NotFound desc = could not find container \"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24\": container with ID starting with c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24 not found: ID does not exist" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.095409 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eaac92f-649f-4974-8386-456b6bd43311" path="/var/lib/kubelet/pods/8eaac92f-649f-4974-8386-456b6bd43311/volumes" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.313784 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.314339 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.808412 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:08 crc kubenswrapper[4593]: I0129 12:13:08.004205 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:09 crc kubenswrapper[4593]: I0129 12:13:09.528999 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:13:09 crc kubenswrapper[4593]: I0129 12:13:09.969974 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fvf74" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" containerID="cri-o://17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" gracePeriod=2 Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.416086 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.606893 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"b0685d5b-09d9-4cb1-86d0-89f46550f541\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.606986 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"b0685d5b-09d9-4cb1-86d0-89f46550f541\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.607189 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"b0685d5b-09d9-4cb1-86d0-89f46550f541\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.608660 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities" (OuterVolumeSpecName: "utilities") pod "b0685d5b-09d9-4cb1-86d0-89f46550f541" (UID: "b0685d5b-09d9-4cb1-86d0-89f46550f541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.612776 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g" (OuterVolumeSpecName: "kube-api-access-x5m2g") pod "b0685d5b-09d9-4cb1-86d0-89f46550f541" (UID: "b0685d5b-09d9-4cb1-86d0-89f46550f541"). InnerVolumeSpecName "kube-api-access-x5m2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.709850 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.709883 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.981999 4593 generic.go:334] "Generic (PLEG): container finished" podID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" exitCode=0 Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982066 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58"} Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982113 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982139 4593 scope.go:117] "RemoveContainer" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982119 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"d1eb148f0820d4908158e1d29cd56e7eb7cb9dbbe8b7a6b3f032a7bdbf59b266"} Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.006943 4593 scope.go:117] "RemoveContainer" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.036601 4593 scope.go:117] "RemoveContainer" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.095151 4593 scope.go:117] "RemoveContainer" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" Jan 29 12:13:11 crc kubenswrapper[4593]: E0129 12:13:11.096011 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58\": container with ID starting with 17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58 not found: ID does not exist" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096050 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58"} err="failed to get container status \"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58\": rpc error: code = NotFound desc = could not find container \"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58\": container with ID starting with 17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58 not found: ID does not exist" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096083 4593 scope.go:117] 
"RemoveContainer" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" Jan 29 12:13:11 crc kubenswrapper[4593]: E0129 12:13:11.096753 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d\": container with ID starting with dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d not found: ID does not exist" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096779 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d"} err="failed to get container status \"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d\": rpc error: code = NotFound desc = could not find container \"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d\": container with ID starting with dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d not found: ID does not exist" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096795 4593 scope.go:117] "RemoveContainer" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" Jan 29 12:13:11 crc kubenswrapper[4593]: E0129 12:13:11.097114 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae\": container with ID starting with 7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae not found: ID does not exist" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.097140 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae"} err="failed to get container status \"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae\": rpc error: code = NotFound desc = could not find container \"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae\": container with ID starting with 7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae not found: ID does not exist" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.216584 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0685d5b-09d9-4cb1-86d0-89f46550f541" (UID: "b0685d5b-09d9-4cb1-86d0-89f46550f541"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.220588 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.317832 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.325251 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:13:13 crc kubenswrapper[4593]: I0129 12:13:13.094930 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" path="/var/lib/kubelet/pods/b0685d5b-09d9-4cb1-86d0-89f46550f541/volumes" Jan 29 12:13:19 crc kubenswrapper[4593]: I0129 12:13:19.057005 4593 generic.go:334] "Generic (PLEG): container finished" podID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerID="f1bbc49dcc0cd36e38a7fd4617bfb0fd01fe811e0e734a91b4f25ae6b23bbeaf" exitCode=0 Jan 29 12:13:19 crc kubenswrapper[4593]: I0129 12:13:19.057072 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerDied","Data":"f1bbc49dcc0cd36e38a7fd4617bfb0fd01fe811e0e734a91b4f25ae6b23bbeaf"} Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.450297 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.600944 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601016 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601141 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601161 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601177 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601207 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601236 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601331 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601361 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.607140 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data" (OuterVolumeSpecName: "config-data") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.607363 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.607380 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.609247 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.628822 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc" (OuterVolumeSpecName: "kube-api-access-bs2hc") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "kube-api-access-bs2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.653055 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.656179 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.664389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.677744 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.703410 4593 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.703454 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704566 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704591 4593 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704604 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704617 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704628 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704658 4593 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704671 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.730222 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.808622 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:21 crc kubenswrapper[4593]: I0129 12:13:21.079087 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 12:13:21 crc kubenswrapper[4593]: I0129 12:13:21.086364 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerDied","Data":"bf88caa96b3fd17945a137b250bf9d7f8872b0e8469ad3aa1ab198d63888646d"} Jan 29 12:13:21 crc kubenswrapper[4593]: I0129 12:13:21.086407 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf88caa96b3fd17945a137b250bf9d7f8872b0e8469ad3aa1ab198d63888646d" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.326558 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327712 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327775 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327784 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327793 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327801 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327827 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327833 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327848 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327855 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327868 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerName="tempest-tests-tempest-tests-runner" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327876 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerName="tempest-tests-tempest-tests-runner" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327891 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327897 4593 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328104 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328132 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerName="tempest-tests-tempest-tests-runner" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328150 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328969 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.331760 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vt7mb" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.337512 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.441241 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.441371 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrlg\" (UniqueName: \"kubernetes.io/projected/be3a2ae9-6f0e-459e-bd91-10a92871767c-kube-api-access-xbrlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.542913 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrlg\" (UniqueName: \"kubernetes.io/projected/be3a2ae9-6f0e-459e-bd91-10a92871767c-kube-api-access-xbrlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.543112 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.544576 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.575522 
4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrlg\" (UniqueName: \"kubernetes.io/projected/be3a2ae9-6f0e-459e-bd91-10a92871767c-kube-api-access-xbrlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.595296 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.667103 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 12:13:32 crc kubenswrapper[4593]: I0129 12:13:32.150483 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 12:13:32 crc kubenswrapper[4593]: I0129 12:13:32.199873 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be3a2ae9-6f0e-459e-bd91-10a92871767c","Type":"ContainerStarted","Data":"a6f153ce8021cd387a610c92bda1b1f2f68e2eea007e984dd04fdffc30f42452"} Jan 29 12:13:34 crc kubenswrapper[4593]: I0129 12:13:34.218860 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be3a2ae9-6f0e-459e-bd91-10a92871767c","Type":"ContainerStarted","Data":"2381ee7cacc824d7c3622424877525831427de11d4cc37fe4c948c4fe154e84a"} Jan 29 12:13:34 crc kubenswrapper[4593]: I0129 12:13:34.237346 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.66563876 podStartE2EDuration="3.237327275s" podCreationTimestamp="2026-01-29 12:13:31 +0000 UTC" firstStartedPulling="2026-01-29 12:13:32.171821943 +0000 UTC m=+4478.044856134" lastFinishedPulling="2026-01-29 12:13:33.743510458 +0000 UTC m=+4479.616544649" observedRunningTime="2026-01-29 12:13:34.234723064 +0000 UTC m=+4480.107757275" watchObservedRunningTime="2026-01-29 12:13:34.237327275 +0000 UTC m=+4480.110361466" Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.845598 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.848720 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.851485 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zc4pg"/"default-dockercfg-zg6z9" Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.851761 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zc4pg"/"kube-root-ca.crt" Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.851969 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zc4pg"/"openshift-service-ca.crt" Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.917785 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.944683 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.944778 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.046923 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.047107 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.047663 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.069322 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.168440 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.666919 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:13:59 crc kubenswrapper[4593]: I0129 12:13:59.466967 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerStarted","Data":"86239900d1d38bd4a5bf781851c2ddc657ff989932d54c44e7e343fa9cb35945"} Jan 29 12:14:03 crc kubenswrapper[4593]: I0129 12:14:03.946850 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:14:03 crc kubenswrapper[4593]: I0129 12:14:03.947627 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:14:07 crc kubenswrapper[4593]: I0129 12:14:07.571094 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerStarted","Data":"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5"} Jan 29 12:14:07 crc kubenswrapper[4593]: I0129 12:14:07.571627 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerStarted","Data":"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76"} Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.615888 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zc4pg/must-gather-htdlp" podStartSLOduration=8.632334522 podStartE2EDuration="16.615864197s" podCreationTimestamp="2026-01-29 12:13:57 +0000 UTC" firstStartedPulling="2026-01-29 12:13:58.676059883 +0000 UTC m=+4504.549094084" lastFinishedPulling="2026-01-29 12:14:06.659589568 +0000 UTC m=+4512.532623759" observedRunningTime="2026-01-29 12:14:07.594173972 +0000 UTC m=+4513.467208173" watchObservedRunningTime="2026-01-29 12:14:13.615864197 +0000 UTC m=+4519.488898388" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.625170 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-46zhj"] Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.626358 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.653236 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.653732 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.755140 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.755299 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.755413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.781310 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.943112 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:14:14 crc kubenswrapper[4593]: I0129 12:14:14.653681 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" event={"ID":"4b73d5b9-a18b-4213-836b-d326b2998b3b","Type":"ContainerStarted","Data":"9490ccfec3ec0d0a7eb16cfabfbf39ebc9c56a9cfb6e795dd876b4c0791d8c44"} Jan 29 12:14:28 crc kubenswrapper[4593]: I0129 12:14:28.940802 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" event={"ID":"4b73d5b9-a18b-4213-836b-d326b2998b3b","Type":"ContainerStarted","Data":"71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6"} Jan 29 12:14:28 crc kubenswrapper[4593]: I0129 12:14:28.967466 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" podStartSLOduration=2.214317328 podStartE2EDuration="15.967413722s" podCreationTimestamp="2026-01-29 12:14:13 +0000 UTC" firstStartedPulling="2026-01-29 12:14:14.006925169 +0000 UTC m=+4519.879959360" lastFinishedPulling="2026-01-29 12:14:27.760021563 +0000 UTC m=+4533.633055754" observedRunningTime="2026-01-29 12:14:28.956532598 +0000 UTC m=+4534.829566789" watchObservedRunningTime="2026-01-29 12:14:28.967413722 +0000 UTC m=+4534.840447913" Jan 29 12:14:33 crc kubenswrapper[4593]: I0129 12:14:33.952287 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:14:33 crc kubenswrapper[4593]: I0129 12:14:33.952981 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.180410 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"] Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.183317 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.186991 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.189461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.207958 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"] Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.275810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.275992 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.276026 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.377648 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.377706 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.377803 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.379003 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod 
\"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.397007 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.419481 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.512027 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:01 crc kubenswrapper[4593]: I0129 12:15:01.069047 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"] Jan 29 12:15:02 crc kubenswrapper[4593]: I0129 12:15:02.388590 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerStarted","Data":"f3a2960ccf5dd7cb1b20ed12f992a709cf119e020342cf8773f91b5fa318e059"} Jan 29 12:15:02 crc kubenswrapper[4593]: I0129 12:15:02.389152 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerStarted","Data":"9fe82d1ffb28043d4ade6eac624b53d781d115801dbf977a3a6388e0494c2202"} Jan 29 12:15:02 crc kubenswrapper[4593]: I0129 12:15:02.422706 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" podStartSLOduration=2.422669327 podStartE2EDuration="2.422669327s" podCreationTimestamp="2026-01-29 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:15:02.41395427 +0000 UTC m=+4568.286988501" watchObservedRunningTime="2026-01-29 12:15:02.422669327 +0000 UTC m=+4568.295703518" Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.400345 4593 generic.go:334] "Generic (PLEG): container finished" podID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerID="f3a2960ccf5dd7cb1b20ed12f992a709cf119e020342cf8773f91b5fa318e059" exitCode=0 Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.400604 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerDied","Data":"f3a2960ccf5dd7cb1b20ed12f992a709cf119e020342cf8773f91b5fa318e059"} Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.947048 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.947158 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.947208 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.948004 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.948087 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" gracePeriod=600 Jan 29 12:15:04 crc kubenswrapper[4593]: E0129 12:15:04.086467 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.410338 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" exitCode=0 Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.410406 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"} Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.410476 4593 scope.go:117] "RemoveContainer" containerID="e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89" Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.411193 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:15:04 crc kubenswrapper[4593]: E0129 12:15:04.411517 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.893514 4593 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.998341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"cdd89dc3-5db6-4bc0-88c1-472488589100\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.998467 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"cdd89dc3-5db6-4bc0-88c1-472488589100\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.998522 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"cdd89dc3-5db6-4bc0-88c1-472488589100\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.999267 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume" (OuterVolumeSpecName: "config-volume") pod "cdd89dc3-5db6-4bc0-88c1-472488589100" (UID: "cdd89dc3-5db6-4bc0-88c1-472488589100"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.012264 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cdd89dc3-5db6-4bc0-88c1-472488589100" (UID: "cdd89dc3-5db6-4bc0-88c1-472488589100"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.019800 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv" (OuterVolumeSpecName: "kube-api-access-pwsqv") pod "cdd89dc3-5db6-4bc0-88c1-472488589100" (UID: "cdd89dc3-5db6-4bc0-88c1-472488589100"). InnerVolumeSpecName "kube-api-access-pwsqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.101183 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.101231 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.101246 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.430437 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerDied","Data":"9fe82d1ffb28043d4ade6eac624b53d781d115801dbf977a3a6388e0494c2202"} Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.430856 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe82d1ffb28043d4ade6eac624b53d781d115801dbf977a3a6388e0494c2202" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.430962 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.505698 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.519424 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 12:15:07 crc kubenswrapper[4593]: I0129 12:15:07.090894 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" path="/var/lib/kubelet/pods/fe3bb310-71b1-4d29-a302-e06181c04f5f/volumes" Jan 29 12:15:16 crc kubenswrapper[4593]: I0129 12:15:16.074611 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:15:16 crc kubenswrapper[4593]: E0129 12:15:16.075481 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:15:16 crc kubenswrapper[4593]: I0129 12:15:16.984220 4593 scope.go:117] "RemoveContainer" containerID="f5dc8ed87db86aba663f3bdc857a868a9a85bafb38e9e0269844cbb77f36242a" Jan 29 12:15:26 crc kubenswrapper[4593]: I0129 12:15:26.665521 4593 generic.go:334] "Generic (PLEG): container finished" podID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerID="71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6" exitCode=0 Jan 29 12:15:26 crc kubenswrapper[4593]: I0129 12:15:26.665611 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-zc4pg/crc-debug-46zhj" event={"ID":"4b73d5b9-a18b-4213-836b-d326b2998b3b","Type":"ContainerDied","Data":"71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6"} Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.077200 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:15:27 crc kubenswrapper[4593]: E0129 12:15:27.078952 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.800200 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.838995 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-46zhj"] Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.849613 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-46zhj"] Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.864565 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"4b73d5b9-a18b-4213-836b-d326b2998b3b\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.865063 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"4b73d5b9-a18b-4213-836b-d326b2998b3b\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.865228 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host" (OuterVolumeSpecName: "host") pod "4b73d5b9-a18b-4213-836b-d326b2998b3b" (UID: "4b73d5b9-a18b-4213-836b-d326b2998b3b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.878611 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5" (OuterVolumeSpecName: "kube-api-access-npng5") pod "4b73d5b9-a18b-4213-836b-d326b2998b3b" (UID: "4b73d5b9-a18b-4213-836b-d326b2998b3b"). InnerVolumeSpecName "kube-api-access-npng5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.968153 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.968200 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4593]: I0129 12:15:28.686392 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9490ccfec3ec0d0a7eb16cfabfbf39ebc9c56a9cfb6e795dd876b4c0791d8c44" Jan 29 12:15:28 crc kubenswrapper[4593]: I0129 12:15:28.686505 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.068274 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-jj248"] Jan 29 12:15:29 crc kubenswrapper[4593]: E0129 12:15:29.068981 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerName="collect-profiles" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069005 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerName="collect-profiles" Jan 29 12:15:29 crc kubenswrapper[4593]: E0129 12:15:29.069029 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerName="container-00" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069035 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerName="container-00" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069269 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerName="collect-profiles" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069289 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerName="container-00" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069952 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.087155 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.087292 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.089399 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" path="/var/lib/kubelet/pods/4b73d5b9-a18b-4213-836b-d326b2998b3b/volumes" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.188835 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.189006 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.189130 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.207457 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.389140 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:29 crc kubenswrapper[4593]: W0129 12:15:29.444432 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aebf42b_1daf_48f3_bf18_8ee07cd74ee2.slice/crio-ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c WatchSource:0}: Error finding container ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c: Status 404 returned error can't find the container with id ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.696522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-jj248" event={"ID":"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2","Type":"ContainerStarted","Data":"54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894"} Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.696869 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-jj248" event={"ID":"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2","Type":"ContainerStarted","Data":"ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c"} Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.715301 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zc4pg/crc-debug-jj248" podStartSLOduration=0.715265736 podStartE2EDuration="715.265736ms" podCreationTimestamp="2026-01-29 12:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:15:29.706257082 +0000 UTC m=+4595.579291273" watchObservedRunningTime="2026-01-29 12:15:29.715265736 +0000 UTC m=+4595.588299917" Jan 29 12:15:30 crc kubenswrapper[4593]: I0129 12:15:30.729088 4593 generic.go:334] "Generic (PLEG): container finished" podID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerID="54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894" exitCode=0 Jan 29 12:15:30 crc kubenswrapper[4593]: I0129 12:15:30.729544 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-jj248" event={"ID":"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2","Type":"ContainerDied","Data":"54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894"} Jan 29 12:15:31 crc kubenswrapper[4593]: I0129 12:15:31.838858 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:31 crc kubenswrapper[4593]: I0129 12:15:31.907764 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-jj248"] Jan 29 12:15:31 crc kubenswrapper[4593]: I0129 12:15:31.917389 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-jj248"] Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.038871 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.039765 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.039903 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host" (OuterVolumeSpecName: "host") pod "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" (UID: "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.040333 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.044613 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz" (OuterVolumeSpecName: "kube-api-access-nlcwz") pod "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" (UID: "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2"). InnerVolumeSpecName "kube-api-access-nlcwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.142528 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.749940 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c" Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.749994 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.085725 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" path="/var/lib/kubelet/pods/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2/volumes" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.088185 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-zxrz4"] Jan 29 12:15:33 crc kubenswrapper[4593]: E0129 12:15:33.088704 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerName="container-00" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.088728 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerName="container-00" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.088981 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerName="container-00" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.090240 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.261992 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.262330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.363619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.363718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.363875 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.467374 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " 
pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.708461 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:33 crc kubenswrapper[4593]: W0129 12:15:33.750311 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41e742a_4985_4b87_8a5b_6a7586971569.slice/crio-fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39 WatchSource:0}: Error finding container fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39: Status 404 returned error can't find the container with id fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39 Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.760388 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" event={"ID":"c41e742a-4985-4b87-8a5b-6a7586971569","Type":"ContainerStarted","Data":"fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39"} Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.789142 4593 generic.go:334] "Generic (PLEG): container finished" podID="c41e742a-4985-4b87-8a5b-6a7586971569" containerID="612c74d7772bc16c58093a75fde2a808f49eb1d7c158d2965d447d9b9b7cb962" exitCode=0 Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.789771 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" event={"ID":"c41e742a-4985-4b87-8a5b-6a7586971569","Type":"ContainerDied","Data":"612c74d7772bc16c58093a75fde2a808f49eb1d7c158d2965d447d9b9b7cb962"} Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.852214 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-zxrz4"] Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.864047 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-zxrz4"] Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.106545 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.132009 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"c41e742a-4985-4b87-8a5b-6a7586971569\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.132126 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"c41e742a-4985-4b87-8a5b-6a7586971569\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.132271 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host" (OuterVolumeSpecName: "host") pod "c41e742a-4985-4b87-8a5b-6a7586971569" (UID: "c41e742a-4985-4b87-8a5b-6a7586971569"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.133099 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.139153 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th" (OuterVolumeSpecName: "kube-api-access-ch9th") pod "c41e742a-4985-4b87-8a5b-6a7586971569" (UID: "c41e742a-4985-4b87-8a5b-6a7586971569"). InnerVolumeSpecName "kube-api-access-ch9th". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.234743 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.825626 4593 scope.go:117] "RemoveContainer" containerID="612c74d7772bc16c58093a75fde2a808f49eb1d7c158d2965d447d9b9b7cb962" Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.825760 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" Jan 29 12:15:37 crc kubenswrapper[4593]: I0129 12:15:37.086827 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" path="/var/lib/kubelet/pods/c41e742a-4985-4b87-8a5b-6a7586971569/volumes" Jan 29 12:15:38 crc kubenswrapper[4593]: I0129 12:15:38.074972 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:15:38 crc kubenswrapper[4593]: E0129 12:15:38.075587 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:15:53 crc kubenswrapper[4593]: I0129 12:15:53.075308 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:15:53 crc kubenswrapper[4593]: E0129 12:15:53.076478 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:02 crc kubenswrapper[4593]: I0129 12:16:02.730336 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api/0.log" Jan 29 12:16:02 crc kubenswrapper[4593]: I0129 12:16:02.954178 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api-log/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.001678 4593 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.154520 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener-log/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.221263 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.352487 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker-log/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.502736 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz_e4241343-d4f5-4690-972e-55f054a93f30/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.698585 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-central-agent/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.734641 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/proxy-httpd/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.750684 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-notification-agent/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.807342 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/sg-core/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.981076 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api-log/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.062838 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.238995 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/probe/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.365334 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/cinder-scheduler/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.434062 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-27mbg_80d7dd41-691a-4411-97c2-91245d43b8ea/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.670818 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5_83fa3cd4-ce6a-44bb-b652-c783504941f9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.733764 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.054162 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.120236 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/dnsmasq-dns/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.175619 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-g462j_fee0ef55-8edb-456c-9344-98a3b34d3aab/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.417421 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-httpd/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.433935 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-log/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.662147 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-httpd/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.719947 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-log/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.882493 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/2.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.017337 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/1.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.210826 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x2n68_0418390b-7622-490c-ad95-ec5eac075440/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.385438 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p4f88_62d982c9-eb7a-4d9d-9cdd-2248c63b06fb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.420874 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon-log/0.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.808884 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494801-8jgxn_f7d47080-9737-4b86-9e40-a6c6bf7f1709/keystone-cron/0.log" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.075090 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:07 crc kubenswrapper[4593]: E0129 12:16:07.075401 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.205545 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6d0c0ba2-e8ed-4361-8aff-e71714a1617c/kube-state-metrics/0.log" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.317104 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f96568f6f-lfzv9_e2e767a2-2e4c-4a41-995f-1f0ca9248d1a/keystone-api/0.log" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.361996 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jt98j_1f7fe168-4498-4002-9233-d6c2d9f115fb/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:08 crc kubenswrapper[4593]: I0129 12:16:08.105612 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct_4c7cff3f-040a-4499-825c-3cccd015326a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:08 crc kubenswrapper[4593]: I0129 12:16:08.306526 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-httpd/0.log" Jan 29 12:16:08 crc kubenswrapper[4593]: I0129 12:16:08.336442 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-api/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.020850 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f/nova-cell0-conductor-conductor/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.327589 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bee10dce-c68f-47f4-84e0-623f276964d8/nova-cell1-conductor-conductor/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.701994 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0b25e9a9-4f12-4b7f-9001-74b6c3feb118/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.946620 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-log/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.986535 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rtfdg_f45f3aca-42e1-4105-b843-f5288550ce8c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.141611 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-api/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.164307 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dc6f5a6c-3bf0-4f78-89f3-1e2683a37958/memcached/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.286670 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-log/0.log" Jan 29 12:16:10 
crc kubenswrapper[4593]: I0129 12:16:10.651400 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.838399 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.858098 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4eff0b9f-e2c4-4ae0-9b42-585f9141d740/nova-scheduler-scheduler/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.952936 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/galera/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.122259 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.379994 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_220bdfcb-98c4-4c78-8d95-ea64edfaf1ab/openstackclient/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.410335 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.469991 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/galera/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.519552 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-metadata/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.640335 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cc9qq_df5842a4-132b-4c71-a970-efe4f00a3827/ovn-controller/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.714783 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g6lk4_9299d646-8191-4da6-a2d1-d5a8c6492e91/openstack-network-exporter/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.882492 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.047656 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.065502 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovs-vswitchd/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.099288 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.140574 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ftxjl_80db2d7c-94e6-418b-a0b4-2b4064356e4b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 
12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.322019 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/openstack-network-exporter/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.378847 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/ovn-northd/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.407987 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/openstack-network-exporter/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.581155 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/openstack-network-exporter/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.581841 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/ovsdbserver-nb/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.709611 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/ovsdbserver-sb/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.931807 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-api/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.031079 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-log/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.357466 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.544399 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/rabbitmq/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.569413 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.621071 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.772195 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.822714 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/rabbitmq/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.910057 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jps44_9a263e61-6654-4030-bd96-c1baa9314111/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.051061 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7tzj5_ce80c16f-5109-46b9-9438-4f05a4132175/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.122274 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb_c3e4e3e3-1994-40a5-bab8-d84db2f44ddb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.157822 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lz46t_b1f286ec-6f85-44c4-94f5-f66bc21c2a64/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.329538 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cfk97_c22e1d76-6585-46e2-9c31-5c002e021882/ssh-known-hosts-edpm-deployment/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.435690 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-server/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.547311 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-httpd/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.036430 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jbnzf_4d1e7e96-e120-43f1-bff0-ea3d624e621b/swift-ring-rebalance/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.142454 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-reaper/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.178319 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-auditor/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.259093 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-replicator/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.321684 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-server/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.357711 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-auditor/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.457387 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-server/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.458518 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-updater/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.491436 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-replicator/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.598899 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-expirer/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.621593 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-auditor/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.747330 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-updater/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.765200 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-replicator/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.786046 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-server/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.868235 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/swift-recon-cron/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.875330 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/rsync/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.115400 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz_ee0ea7fe-3ea4-4944-8101-b03f1566882f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.143453 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d5ea9892-a149-4cfe-bb9c-ef636eacd125/tempest-tests-tempest-tests-runner/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.294985 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be3a2ae9-6f0e-459e-bd91-10a92871767c/test-operator-logs-container/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.347537 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p_0f5fb9be-3781-4b9a-96d8-705593907345/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:21 crc kubenswrapper[4593]: I0129 12:16:21.077674 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:21 crc kubenswrapper[4593]: E0129 12:16:21.078448 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:32 crc kubenswrapper[4593]: I0129 12:16:32.075395 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:32 crc kubenswrapper[4593]: E0129 12:16:32.076381 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:44 crc kubenswrapper[4593]: I0129 12:16:44.853032 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.127778 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.141027 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.183869 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.353441 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.377312 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.382508 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/extract/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.860037 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-7hmqc_e35e9127-0337-4860-b938-bb477a408f1e/manager/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.922579 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-7ns7q_c5e6d3a8-d6d9-4445-9708-28b88928333e/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.076343 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:46 crc kubenswrapper[4593]: E0129 12:16:46.076998 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.369539 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2ml7m_499923d8-4593-4225-bc4c-6166003a0bb3/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 
12:16:46.385675 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-xw2pz_734187ee-1e17-4cdc-b3bb-cfbd6e424793/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.569517 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-xqcrc_50471b23-1d0d-4bd9-a66f-a89b3a39a612/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.597105 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-98l2v_50a8381e-e59b-4400-9209-c33ef4f99c23/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.922289 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-t584q_812ebcfb-766d-4a1b-aaaa-2dd5a96ce047/manager/0.log" Jan 29 12:16:47 crc kubenswrapper[4593]: I0129 12:16:47.000587 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6zkvt_c2cda883-37e6-4c21-b320-4962ffdc98b3/manager/0.log" Jan 29 12:16:47 crc kubenswrapper[4593]: I0129 12:16:47.211260 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-c89cq_0881deda-c42a-48d8-9059-b7eaf66c0f9f/manager/0.log" Jan 29 12:16:47 crc kubenswrapper[4593]: I0129 12:16:47.217885 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-xf5fn_cdb96936-cd34-44fd-94b5-5570688fdfe6/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.175648 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-qt87l_336c4e93-7d0b-4570-aafc-22e0f812db12/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.223758 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-zx6r8_62efedcb-a194-4692-8e84-a0da7777a400/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.434679 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-8kf6p_40ab1792-0354-4c78-ac44-a217fbc610a9/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.507083 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9dbds_ba6fb45a-59ff-42ee-acb0-0ee43d001e1e/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.740652 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb_f6e2fc57-0cce-4f5a-bf3e-63efbfff1073/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.915663 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55ccc59995-d7xm7_c8e623f1-2830-4c78-b17a-6000f32978a3/operator/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.263709 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sbxwt_0661b605-afb6-4341-9703-ea25a3afc19d/registry-server/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.677134 4593 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-kttv8_2c7ec826-43f0-49f3-9d96-4330427e4ed9/manager/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.681757 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-885pn_9b88fe2c-a82a-4284-961a-8af3818815ec/manager/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.996492 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-k4b7q_0e86fa54-1e41-4bb9-86c7-a0ea0d919270/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.001900 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tfkk2_2f32633b-0490-4885-9543-a140c807c742/operator/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.115790 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d898fd894-sh94p_960bb326-dc22-43e5-bc4f-05c9ce9e26a9/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.477012 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:16:50 crc kubenswrapper[4593]: E0129 12:16:50.477367 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" containerName="container-00" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.477379 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" containerName="container-00" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.477571 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" containerName="container-00" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.478833 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.499741 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.538952 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.539004 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.539094 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.646797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.646857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.646957 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.647520 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.648133 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.684805 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.764058 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-z4mp8_ea8d9bb8-bdec-453d-a308-28b962971254/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.796080 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.881352 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ltfr4_b45fb247-850e-40b4-b62e-8551d55efcba/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.987204 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-zmssx_0259a320-8da9-48e5-8d73-25b09774d9c1/manager/0.log" Jan 29 12:16:51 crc kubenswrapper[4593]: I0129 12:16:51.323659 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:16:51 crc kubenswrapper[4593]: I0129 12:16:51.504458 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerStarted","Data":"332780bb5ef29b3dd0853836a33ab4697026e10c50ef91e921d4a17666a2c402"} Jan 29 12:16:52 crc kubenswrapper[4593]: I0129 12:16:52.515248 4593 generic.go:334] "Generic (PLEG): container finished" podID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" exitCode=0 Jan 29 12:16:52 crc kubenswrapper[4593]: I0129 12:16:52.515284 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23"} Jan 29 12:16:52 crc kubenswrapper[4593]: I0129 12:16:52.517510 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:16:53 crc kubenswrapper[4593]: I0129 12:16:53.527767 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerStarted","Data":"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c"} Jan 29 12:16:59 crc kubenswrapper[4593]: I0129 12:16:59.074949 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:59 crc kubenswrapper[4593]: E0129 12:16:59.075687 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:59 crc kubenswrapper[4593]: I0129 12:16:59.598301 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" exitCode=0 Jan 29 12:16:59 crc kubenswrapper[4593]: I0129 12:16:59.598384 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c"} Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.609786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerStarted","Data":"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9"} Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.640385 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvj4k" podStartSLOduration=3.146868685 podStartE2EDuration="10.640365116s" podCreationTimestamp="2026-01-29 12:16:50 +0000 UTC" firstStartedPulling="2026-01-29 12:16:52.517092663 +0000 UTC m=+4678.390126854" lastFinishedPulling="2026-01-29 12:17:00.010589094 +0000 UTC m=+4685.883623285" observedRunningTime="2026-01-29 12:17:00.632682698 +0000 UTC m=+4686.505716909" watchObservedRunningTime="2026-01-29 12:17:00.640365116 +0000 UTC m=+4686.513399307" Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.797100 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.797295 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:01 crc kubenswrapper[4593]: I0129 12:17:01.854734 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" probeResult="failure" output=< Jan 29 12:17:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:17:01 crc kubenswrapper[4593]: > Jan 29 12:17:10 crc kubenswrapper[4593]: I0129 12:17:10.075187 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:10 crc kubenswrapper[4593]: E0129 12:17:10.076000 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:11 crc kubenswrapper[4593]: I0129 12:17:11.843869 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" probeResult="failure" output=< Jan 29 12:17:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:17:11 crc kubenswrapper[4593]: > Jan 29 12:17:17 crc kubenswrapper[4593]: I0129 12:17:17.098592 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pf5p2_9bce548b-2c64-4ac5-a797-979de4cf7656/control-plane-machine-set-operator/0.log" Jan 29 12:17:17 crc kubenswrapper[4593]: I0129 12:17:17.404837 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/kube-rbac-proxy/0.log" Jan 29 12:17:17 crc kubenswrapper[4593]: I0129 12:17:17.431594 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/machine-api-operator/0.log" Jan 29 12:17:21 crc kubenswrapper[4593]: I0129 12:17:21.075082 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:21 crc kubenswrapper[4593]: E0129 12:17:21.076170 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:21 crc kubenswrapper[4593]: I0129 12:17:21.846547 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" probeResult="failure" output=< Jan 29 12:17:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:17:21 crc kubenswrapper[4593]: > Jan 29 12:17:30 crc kubenswrapper[4593]: I0129 12:17:30.862181 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:30 crc kubenswrapper[4593]: I0129 12:17:30.919357 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:31 crc kubenswrapper[4593]: I0129 12:17:31.111165 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:17:31 crc kubenswrapper[4593]: I0129 12:17:31.921023 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" containerID="cri-o://ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" gracePeriod=2 Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.432927 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.472945 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.473051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.473151 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.473774 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities" (OuterVolumeSpecName: "utilities") pod "3950981d-ad0a-47e1-b5a2-da040c9c3e49" (UID: "3950981d-ad0a-47e1-b5a2-da040c9c3e49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.501941 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx" (OuterVolumeSpecName: "kube-api-access-lzjsx") pod "3950981d-ad0a-47e1-b5a2-da040c9c3e49" (UID: "3950981d-ad0a-47e1-b5a2-da040c9c3e49"). InnerVolumeSpecName "kube-api-access-lzjsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.575483 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.575887 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.698884 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3950981d-ad0a-47e1-b5a2-da040c9c3e49" (UID: "3950981d-ad0a-47e1-b5a2-da040c9c3e49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.801107 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.894499 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qhfhj_59d387c2-4d0b-4d6c-a0d8-2230657bebd0/cert-manager-controller/0.log" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935620 4593 generic.go:334] "Generic (PLEG): container finished" podID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" exitCode=0 Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935680 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9"} Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935712 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"332780bb5ef29b3dd0853836a33ab4697026e10c50ef91e921d4a17666a2c402"} Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935731 4593 scope.go:117] "RemoveContainer" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935904 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.973425 4593 scope.go:117] "RemoveContainer" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.990963 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.998561 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.008868 4593 scope.go:117] "RemoveContainer" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.049781 4593 scope.go:117] "RemoveContainer" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.054174 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9\": container with ID starting with ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9 not found: ID does not exist" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054382 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9"} err="failed to get container status \"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9\": rpc error: code = NotFound desc = could not 
find container \"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9\": container with ID starting with ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9 not found: ID does not exist" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054481 4593 scope.go:117] "RemoveContainer" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.054873 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c\": container with ID starting with 57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c not found: ID does not exist" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054926 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c"} err="failed to get container status \"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c\": rpc error: code = NotFound desc = could not find container \"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c\": container with ID starting with 57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c not found: ID does not exist" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054954 4593 scope.go:117] "RemoveContainer" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.055180 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23\": container with ID starting with 3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23 not found: ID does not exist" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.055211 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23"} err="failed to get container status \"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23\": rpc error: code = NotFound desc = could not find container \"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23\": container with ID starting with 3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23 not found: ID does not exist" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.078936 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.079325 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.084838 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" 
path="/var/lib/kubelet/pods/3950981d-ad0a-47e1-b5a2-da040c9c3e49/volumes" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.171262 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lw7j7_79aa2cc5-a031-412d-a4c7-ba9251f84ed6/cert-manager-cainjector/0.log" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.219891 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t7s4r_e2b5756a-c46e-4e76-90bf-0a5c7c1dc759/cert-manager-webhook/0.log" Jan 29 12:17:46 crc kubenswrapper[4593]: I0129 12:17:46.075854 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:46 crc kubenswrapper[4593]: E0129 12:17:46.076697 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:47 crc kubenswrapper[4593]: I0129 12:17:47.694465 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nck62_2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2/nmstate-console-plugin/0.log" Jan 29 12:17:47 crc kubenswrapper[4593]: I0129 12:17:47.938591 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/nmstate-metrics/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.029746 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q2lbc_ea391d24-e32c-440b-b5c2-218920192125/nmstate-handler/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.037093 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/kube-rbac-proxy/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.191914 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xmhmc_b2e0c4ff-8a2b-474d-8414-a0026d61b11e/nmstate-operator/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.286532 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-47n46_72d4f068-dc20-44d0-aca6-c8f0992536e6/nmstate-webhook/0.log" Jan 29 12:17:59 crc kubenswrapper[4593]: I0129 12:17:59.079268 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:59 crc kubenswrapper[4593]: E0129 12:17:59.079981 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:13 crc kubenswrapper[4593]: I0129 12:18:13.076350 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:13 crc 
kubenswrapper[4593]: E0129 12:18:13.077227 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:17 crc kubenswrapper[4593]: I0129 12:18:17.124565 4593 scope.go:117] "RemoveContainer" containerID="d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a" Jan 29 12:18:17 crc kubenswrapper[4593]: I0129 12:18:17.157510 4593 scope.go:117] "RemoveContainer" containerID="e974cfd4ba99c10cc2aad6fe3294ee279ef945d78da77b5575efff84d75dc3f5" Jan 29 12:18:17 crc kubenswrapper[4593]: I0129 12:18:17.196978 4593 scope.go:117] "RemoveContainer" containerID="9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536" Jan 29 12:18:23 crc kubenswrapper[4593]: I0129 12:18:23.578190 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/kube-rbac-proxy/0.log" Jan 29 12:18:23 crc kubenswrapper[4593]: I0129 12:18:23.722622 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/controller/0.log" Jan 29 12:18:23 crc kubenswrapper[4593]: I0129 12:18:23.772268 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.129245 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.182399 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.186730 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.251603 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.415766 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.517231 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.537566 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.560503 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.785622 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.808555 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.853250 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/controller/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.879539 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.038748 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr-metrics/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.081799 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:25 crc kubenswrapper[4593]: E0129 12:18:25.082133 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.160204 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy-frr/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.234847 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.482235 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/reloader/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.654578 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dj42h_45d808cf-80c4-4f7b-a172-76e4ecd9e37b/frr-k8s-webhook-server/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.990426 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bf4d9f4bd-ll9bk_421156e9-d8d3-4112-bd58-d09c40a70a12/manager/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.133022 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fdc78c47c-w2tv4_c3381187-83f6-4877-8d72-3ed30f360a86/webhook-server/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.439835 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/kube-rbac-proxy/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.477439 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.766893 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/speaker/0.log" Jan 29 12:18:39 crc kubenswrapper[4593]: I0129 12:18:39.078128 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:39 crc kubenswrapper[4593]: E0129 12:18:39.078889 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.050006 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.376262 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.443316 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.443513 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.489781 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.587849 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.842708 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/extract/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.868720 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.105188 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.112074 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.116073 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.286787 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.325736 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/extract/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.356072 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.944686 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.199842 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.207117 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.207621 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.404943 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.438860 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.705561 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.132976 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/registry-server/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.155661 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.170780 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.173796 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:18:45 
crc kubenswrapper[4593]: I0129 12:18:45.370179 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.380594 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.578912 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s2rlp_7a59fe58-c900-46ea-8ff2-8a7f49210dc3/marketplace-operator/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.720474 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.970426 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.970427 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.029403 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/registry-server/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.073532 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.217932 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.235356 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.330484 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.502591 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.522777 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.724867 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.729844 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:18:47 crc 
kubenswrapper[4593]: I0129 12:18:47.139287 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:47.301641 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/registry-server/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:47.774912 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/registry-server/0.log" Jan 29 12:18:53 crc kubenswrapper[4593]: I0129 12:18:53.075449 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:53 crc kubenswrapper[4593]: E0129 12:18:53.076198 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:54 crc kubenswrapper[4593]: I0129 12:18:54.771512 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8581bb16-8d35-4521-8886-3c71554a3a4d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 29 12:18:56 crc kubenswrapper[4593]: I0129 12:18:56.852828 4593 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-47n46 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:18:56 crc kubenswrapper[4593]: I0129 12:18:56.853344 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" podUID="72d4f068-dc20-44d0-aca6-c8f0992536e6" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:19:08 crc kubenswrapper[4593]: I0129 12:19:08.075327 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:08 crc kubenswrapper[4593]: E0129 12:19:08.076177 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:19 crc kubenswrapper[4593]: I0129 12:19:19.083983 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:19 crc kubenswrapper[4593]: E0129 12:19:19.084944 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:31 crc kubenswrapper[4593]: I0129 12:19:31.075168 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:31 crc kubenswrapper[4593]: E0129 12:19:31.076069 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:43 crc kubenswrapper[4593]: I0129 12:19:43.083263 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:43 crc kubenswrapper[4593]: E0129 12:19:43.084218 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:58 crc kubenswrapper[4593]: I0129 12:19:58.076013 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:58 crc kubenswrapper[4593]: E0129 12:19:58.076940 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:20:11 crc kubenswrapper[4593]: I0129 12:20:11.075848 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:20:12 crc kubenswrapper[4593]: I0129 12:20:12.119123 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27"} Jan 29 12:21:17 crc kubenswrapper[4593]: I0129 12:21:17.321372 4593 scope.go:117] "RemoveContainer" containerID="71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.613870 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:33 crc kubenswrapper[4593]: E0129 12:21:33.615339 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-utilities" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615369 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-utilities" Jan 29 12:21:33 
crc kubenswrapper[4593]: E0129 12:21:33.615382 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-content" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615389 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-content" Jan 29 12:21:33 crc kubenswrapper[4593]: E0129 12:21:33.615415 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615424 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615662 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.617054 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.662143 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.783763 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.783895 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.783958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.885691 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.885781 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.885817 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.886324 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.886675 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.909502 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.945210 4593 generic.go:334] "Generic (PLEG): container finished" podID="006cda43-0b58-4970-bcf0-c355509620f8" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" exitCode=0 Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.945293 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerDied","Data":"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76"} Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.946048 4593 scope.go:117] "RemoveContainer" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.963213 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.647672 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.775194 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zc4pg_must-gather-htdlp_006cda43-0b58-4970-bcf0-c355509620f8/gather/0.log" Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.957515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda"} Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.957560 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"e20ec106468d262fa4bc5b0870a4ccc7cc66d00dbc9cc0aea978c890696a3eae"} Jan 29 12:21:36 crc kubenswrapper[4593]: I0129 12:21:36.000434 4593 generic.go:334] "Generic (PLEG): container finished" podID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerID="4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda" exitCode=0 Jan 29 12:21:36 crc kubenswrapper[4593]: I0129 12:21:36.000847 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda"} Jan 29 12:21:37 crc kubenswrapper[4593]: I0129 12:21:37.013427 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38"} Jan 29 12:21:39 crc kubenswrapper[4593]: I0129 12:21:39.033217 4593 generic.go:334] "Generic (PLEG): container finished" podID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerID="7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38" exitCode=0 Jan 29 12:21:39 crc kubenswrapper[4593]: I0129 12:21:39.033292 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38"} Jan 29 12:21:40 crc kubenswrapper[4593]: I0129 12:21:40.046783 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718"} Jan 29 12:21:40 crc kubenswrapper[4593]: I0129 12:21:40.070551 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fhwxm" podStartSLOduration=3.3962060530000002 podStartE2EDuration="7.070505955s" podCreationTimestamp="2026-01-29 12:21:33 +0000 UTC" firstStartedPulling="2026-01-29 12:21:36.004915946 +0000 UTC m=+4961.877950137" lastFinishedPulling="2026-01-29 12:21:39.679215848 +0000 UTC m=+4965.552250039" observedRunningTime="2026-01-29 12:21:40.067190965 +0000 UTC m=+4965.940225156" watchObservedRunningTime="2026-01-29 12:21:40.070505955 +0000 UTC 
m=+4965.943540146" Jan 29 12:21:43 crc kubenswrapper[4593]: I0129 12:21:43.963996 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:43 crc kubenswrapper[4593]: I0129 12:21:43.964702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.023448 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.134330 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.324434 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.324886 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zc4pg/must-gather-htdlp" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" containerID="cri-o://0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" gracePeriod=2 Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.337030 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.819158 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zc4pg_must-gather-htdlp_006cda43-0b58-4970-bcf0-c355509620f8/copy/0.log" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.820037 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.943566 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"006cda43-0b58-4970-bcf0-c355509620f8\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.948039 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"006cda43-0b58-4970-bcf0-c355509620f8\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.971917 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t" (OuterVolumeSpecName: "kube-api-access-lln5t") pod "006cda43-0b58-4970-bcf0-c355509620f8" (UID: "006cda43-0b58-4970-bcf0-c355509620f8"). InnerVolumeSpecName "kube-api-access-lln5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.053190 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.109468 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zc4pg_must-gather-htdlp_006cda43-0b58-4970-bcf0-c355509620f8/copy/0.log" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.110121 4593 generic.go:334] "Generic (PLEG): container finished" podID="006cda43-0b58-4970-bcf0-c355509620f8" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" exitCode=143 Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.111450 4593 scope.go:117] "RemoveContainer" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.111802 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.194155 4593 scope.go:117] "RemoveContainer" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.195206 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.328443 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "006cda43-0b58-4970-bcf0-c355509620f8" (UID: "006cda43-0b58-4970-bcf0-c355509620f8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.337445 4593 scope.go:117] "RemoveContainer" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" Jan 29 12:21:45 crc kubenswrapper[4593]: E0129 12:21:45.338480 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5\": container with ID starting with 0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5 not found: ID does not exist" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.338528 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5"} err="failed to get container status \"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5\": rpc error: code = NotFound desc = could not find container \"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5\": container with ID starting with 0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5 not found: ID does not exist" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.338551 4593 scope.go:117] "RemoveContainer" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:45 crc kubenswrapper[4593]: E0129 12:21:45.342920 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76\": container with ID starting with 46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76 not found: ID does not exist" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.342969 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76"} err="failed to get container status \"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76\": rpc error: code = NotFound desc = could not find container \"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76\": container with ID starting with 46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76 not found: ID does not exist" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.377834 4593 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:46 crc kubenswrapper[4593]: I0129 12:21:46.120021 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fhwxm" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" containerID="cri-o://15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718" gracePeriod=2 Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.089175 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006cda43-0b58-4970-bcf0-c355509620f8" path="/var/lib/kubelet/pods/006cda43-0b58-4970-bcf0-c355509620f8/volumes" Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.133671 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerID="15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718" exitCode=0 Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.133738 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718"} Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.741801 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.924425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.924574 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.924722 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.925492 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities" (OuterVolumeSpecName: "utilities") pod "544e38ca-9cdb-4ca1-82b9-dd6290b12428" (UID: "544e38ca-9cdb-4ca1-82b9-dd6290b12428"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.932039 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z" (OuterVolumeSpecName: "kube-api-access-7cx7z") pod "544e38ca-9cdb-4ca1-82b9-dd6290b12428" (UID: "544e38ca-9cdb-4ca1-82b9-dd6290b12428"). InnerVolumeSpecName "kube-api-access-7cx7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.027210 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.027246 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.146261 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"e20ec106468d262fa4bc5b0870a4ccc7cc66d00dbc9cc0aea978c890696a3eae"} Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.146325 4593 scope.go:117] "RemoveContainer" containerID="15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.147045 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.167791 4593 scope.go:117] "RemoveContainer" containerID="7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.654875 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "544e38ca-9cdb-4ca1-82b9-dd6290b12428" (UID: "544e38ca-9cdb-4ca1-82b9-dd6290b12428"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.690223 4593 scope.go:117] "RemoveContainer" containerID="4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.741012 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.785149 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.795258 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:49 crc kubenswrapper[4593]: I0129 12:21:49.085989 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" path="/var/lib/kubelet/pods/544e38ca-9cdb-4ca1-82b9-dd6290b12428/volumes" Jan 29 12:22:17 crc kubenswrapper[4593]: I0129 12:22:17.386135 4593 scope.go:117] "RemoveContainer" containerID="54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894" Jan 29 12:22:33 crc kubenswrapper[4593]: I0129 12:22:33.946177 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:22:33 crc kubenswrapper[4593]: I0129 12:22:33.946812 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:03 crc kubenswrapper[4593]: I0129 12:23:03.958592 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:23:03 crc kubenswrapper[4593]: I0129 12:23:03.959278 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.945800 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.946392 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:33 crc 
kubenswrapper[4593]: I0129 12:23:33.946453 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.947275 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.947329 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27" gracePeriod=600 Jan 29 12:23:34 crc kubenswrapper[4593]: I0129 12:23:34.133945 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27" exitCode=0 Jan 29 12:23:34 crc kubenswrapper[4593]: I0129 12:23:34.134006 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27"} Jan 29 12:23:34 crc kubenswrapper[4593]: I0129 12:23:34.134045 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:23:35 crc kubenswrapper[4593]: I0129 12:23:35.144188 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"} Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.187357 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188361 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-content" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188380 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-content" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188395 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188402 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188418 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188423 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188438 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="gather" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188443 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="gather" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188453 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-utilities" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188459 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-utilities" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188724 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188744 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="gather" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188758 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.190215 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.203748 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.261380 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.261489 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.261808 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.363393 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364063 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") 
pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364108 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364139 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364573 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.868174 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:10 crc kubenswrapper[4593]: I0129 12:24:10.117322 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:10 crc kubenswrapper[4593]: I0129 12:24:10.578877 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.510858 4593 generic.go:334] "Generic (PLEG): container finished" podID="37487459-95b3-4700-85d3-8eae3d218459" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7" exitCode=0 Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.511224 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"} Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.511274 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerStarted","Data":"d4277f5a84556bab91331ef8c9c210c90b196f2deb075bbaeb81e6199c759bee"} Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.515328 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:24:13 crc kubenswrapper[4593]: I0129 12:24:13.579213 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerStarted","Data":"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"} Jan 29 12:24:14 crc kubenswrapper[4593]: I0129 12:24:14.591594 4593 generic.go:334] "Generic (PLEG): container finished" podID="37487459-95b3-4700-85d3-8eae3d218459" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24" exitCode=0 Jan 29 12:24:14 crc kubenswrapper[4593]: I0129 12:24:14.591685 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"} Jan 29 12:24:15 crc kubenswrapper[4593]: I0129 12:24:15.607462 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerStarted","Data":"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"} Jan 29 12:24:15 crc kubenswrapper[4593]: I0129 12:24:15.641521 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gh6r5" podStartSLOduration=2.954399756 podStartE2EDuration="6.641493715s" podCreationTimestamp="2026-01-29 12:24:09 +0000 UTC" firstStartedPulling="2026-01-29 12:24:11.513675139 +0000 UTC m=+5117.386709330" lastFinishedPulling="2026-01-29 12:24:15.200769098 +0000 UTC m=+5121.073803289" observedRunningTime="2026-01-29 12:24:15.62361359 +0000 UTC m=+5121.496647781" watchObservedRunningTime="2026-01-29 12:24:15.641493715 +0000 UTC m=+5121.514527906" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.404589 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fsx2j"] Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.406855 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.426233 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"] Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.477056 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.477375 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.477678 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579020 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579086 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579824 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579921 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.602268 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.728232 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:18 crc kubenswrapper[4593]: I0129 12:24:18.356675 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"] Jan 29 12:24:18 crc kubenswrapper[4593]: I0129 12:24:18.632969 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerStarted","Data":"209ffbcc1a678a7d65c8310cd83d69a1db8590a0079496bbe454339367ab236f"} Jan 29 12:24:19 crc kubenswrapper[4593]: I0129 12:24:19.644218 4593 generic.go:334] "Generic (PLEG): container finished" podID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff" exitCode=0 Jan 29 12:24:19 crc kubenswrapper[4593]: I0129 12:24:19.644275 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"} Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.118189 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.118551 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.171306 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.708673 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:21 crc kubenswrapper[4593]: I0129 12:24:21.664884 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerStarted","Data":"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"} Jan 29 12:24:21 crc kubenswrapper[4593]: I0129 12:24:21.783049 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:22 crc kubenswrapper[4593]: I0129 12:24:22.673766 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gh6r5" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" containerID="cri-o://325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" gracePeriod=2 Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.637023 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.688489 4593 generic.go:334] "Generic (PLEG): container finished" podID="37487459-95b3-4700-85d3-8eae3d218459" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" exitCode=0 Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.688541 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"} Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.689541 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"d4277f5a84556bab91331ef8c9c210c90b196f2deb075bbaeb81e6199c759bee"} Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.688591 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.689672 4593 scope.go:117] "RemoveContainer" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.777435 4593 scope.go:117] "RemoveContainer" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.793109 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"37487459-95b3-4700-85d3-8eae3d218459\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.793417 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") pod \"37487459-95b3-4700-85d3-8eae3d218459\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.793540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"37487459-95b3-4700-85d3-8eae3d218459\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.795477 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities" (OuterVolumeSpecName: "utilities") pod "37487459-95b3-4700-85d3-8eae3d218459" (UID: "37487459-95b3-4700-85d3-8eae3d218459"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.803425 4593 scope.go:117] "RemoveContainer" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.812174 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44" (OuterVolumeSpecName: "kube-api-access-cxw44") pod "37487459-95b3-4700-85d3-8eae3d218459" (UID: "37487459-95b3-4700-85d3-8eae3d218459"). InnerVolumeSpecName "kube-api-access-cxw44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.825329 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37487459-95b3-4700-85d3-8eae3d218459" (UID: "37487459-95b3-4700-85d3-8eae3d218459"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.894888 4593 scope.go:117] "RemoveContainer" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" Jan 29 12:24:23 crc kubenswrapper[4593]: E0129 12:24:23.895470 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e\": container with ID starting with 325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e not found: ID does not exist" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.895508 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"} err="failed to get container status \"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e\": rpc error: code = NotFound desc = could not find container \"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e\": container with ID starting with 325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e not found: ID does not exist" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.895534 4593 scope.go:117] "RemoveContainer" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24" Jan 29 12:24:23 crc kubenswrapper[4593]: E0129 12:24:23.896043 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24\": container with ID starting with 0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24 not found: ID does not exist" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.896160 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"} err="failed to get container status \"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24\": rpc error: code = NotFound desc = could not find container \"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24\": container with ID starting with 
0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24 not found: ID does not exist" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.896259 4593 scope.go:117] "RemoveContainer" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7" Jan 29 12:24:23 crc kubenswrapper[4593]: E0129 12:24:23.896606 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7\": container with ID starting with 16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7 not found: ID does not exist" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.896657 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"} err="failed to get container status \"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7\": rpc error: code = NotFound desc = could not find container \"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7\": container with ID starting with 16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7 not found: ID does not exist" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.897402 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.897486 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") on node \"crc\" DevicePath \"\"" Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.897555 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.032766 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.041499 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.703071 4593 generic.go:334] "Generic (PLEG): container finished" podID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030" exitCode=0 Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.703139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"} Jan 29 12:24:25 crc kubenswrapper[4593]: I0129 12:24:25.085570 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37487459-95b3-4700-85d3-8eae3d218459" path="/var/lib/kubelet/pods/37487459-95b3-4700-85d3-8eae3d218459/volumes" Jan 29 12:24:25 crc kubenswrapper[4593]: I0129 12:24:25.726110 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" 
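Above, the RemoveContainer entries for the already-deleted redhat-marketplace-gh6r5 containers are followed by ContainerStatus calls to the runtime that come back as gRPC NotFound errors, which the kubelet logs as "DeleteContainer returned error" and then carries on, since the container is already gone (the same pattern repeats below for community-operators-fsx2j). Distinguishing that case from a real runtime failure is ordinary gRPC status handling, roughly as in this sketch (illustrative, not the kubelet's code):

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// describe shows how a caller typically tells a gRPC NotFound apart from
// other runtime errors, matching the "rpc error: code = NotFound" lines above.
func describe(err error) {
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already removed, nothing to do:", err)
		return
	}
	fmt.Println("unexpected runtime error:", err)
}

func main() {
	describe(status.Error(codes.NotFound, "could not find container"))
	describe(errors.New("runtime unavailable"))
}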
event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerStarted","Data":"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"} Jan 29 12:24:25 crc kubenswrapper[4593]: I0129 12:24:25.751675 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fsx2j" podStartSLOduration=3.186550892 podStartE2EDuration="8.751656429s" podCreationTimestamp="2026-01-29 12:24:17 +0000 UTC" firstStartedPulling="2026-01-29 12:24:19.65197176 +0000 UTC m=+5125.525005951" lastFinishedPulling="2026-01-29 12:24:25.217077297 +0000 UTC m=+5131.090111488" observedRunningTime="2026-01-29 12:24:25.744623348 +0000 UTC m=+5131.617657539" watchObservedRunningTime="2026-01-29 12:24:25.751656429 +0000 UTC m=+5131.624690620" Jan 29 12:24:27 crc kubenswrapper[4593]: I0129 12:24:27.728779 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:27 crc kubenswrapper[4593]: I0129 12:24:27.729243 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:27 crc kubenswrapper[4593]: I0129 12:24:27.779625 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.841728 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:24:28 crc kubenswrapper[4593]: E0129 12:24:28.842423 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842439 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" Jan 29 12:24:28 crc kubenswrapper[4593]: E0129 12:24:28.842469 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-utilities" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842476 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-utilities" Jan 29 12:24:28 crc kubenswrapper[4593]: E0129 12:24:28.842495 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-content" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842502 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-content" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842749 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.845763 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.869243 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dw4s4"/"openshift-service-ca.crt" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.869243 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dw4s4"/"kube-root-ca.crt" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.892505 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.954689 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.954775 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.056253 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.056341 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.057020 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.116361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.165071 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.770080 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.802065 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.961583 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:24:29 crc kubenswrapper[4593]: W0129 12:24:29.984983 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f07111_44a8_402c_887e_fb65ab51a2ba.slice/crio-245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd WatchSource:0}: Error finding container 245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd: Status 404 returned error can't find the container with id 245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.802306 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerStarted","Data":"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a"} Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.802671 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerStarted","Data":"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"} Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.802686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerStarted","Data":"245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd"} Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.830586 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" podStartSLOduration=2.830557455 podStartE2EDuration="2.830557455s" podCreationTimestamp="2026-01-29 12:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:24:30.82779531 +0000 UTC m=+5136.700829511" watchObservedRunningTime="2026-01-29 12:24:30.830557455 +0000 UTC m=+5136.703591656" Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.878520 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"] Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.880330 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.883756 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dw4s4"/"default-dockercfg-gg8rn" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.016117 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.016257 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.118904 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.119122 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.119156 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.155572 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.201075 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:24:35 crc kubenswrapper[4593]: W0129 12:24:35.259140 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21818d64_20a5_4483_8f13_919b612d1007.slice/crio-a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0 WatchSource:0}: Error finding container a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0: Status 404 returned error can't find the container with id a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0 Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.849157 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" event={"ID":"21818d64-20a5-4483-8f13-919b612d1007","Type":"ContainerStarted","Data":"b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e"} Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.849712 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" event={"ID":"21818d64-20a5-4483-8f13-919b612d1007","Type":"ContainerStarted","Data":"a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0"} Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.874901 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" podStartSLOduration=1.874862275 podStartE2EDuration="1.874862275s" podCreationTimestamp="2026-01-29 12:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:24:35.862795068 +0000 UTC m=+5141.735829259" watchObservedRunningTime="2026-01-29 12:24:35.874862275 +0000 UTC m=+5141.747896456" Jan 29 12:24:37 crc kubenswrapper[4593]: I0129 12:24:37.788875 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:37 crc kubenswrapper[4593]: I0129 12:24:37.879600 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"] Jan 29 12:24:37 crc kubenswrapper[4593]: I0129 12:24:37.879846 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fsx2j" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server" containerID="cri-o://4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" gracePeriod=2 Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.540761 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.655623 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.655880 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.655919 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.656604 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities" (OuterVolumeSpecName: "utilities") pod "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" (UID: "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.670378 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2" (OuterVolumeSpecName: "kube-api-access-dhwr2") pod "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" (UID: "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b"). InnerVolumeSpecName "kube-api-access-dhwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.753942 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" (UID: "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.763563 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.763644 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.763663 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") on node \"crc\" DevicePath \"\"" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883042 4593 generic.go:334] "Generic (PLEG): container finished" podID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" exitCode=0 Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883094 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"} Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"209ffbcc1a678a7d65c8310cd83d69a1db8590a0079496bbe454339367ab236f"} Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883152 4593 scope.go:117] "RemoveContainer" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883312 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.916930 4593 scope.go:117] "RemoveContainer" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030" Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.922910 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"] Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.933966 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"] Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.981394 4593 scope.go:117] "RemoveContainer" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.020384 4593 scope.go:117] "RemoveContainer" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" Jan 29 12:24:39 crc kubenswrapper[4593]: E0129 12:24:39.022062 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124\": container with ID starting with 4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124 not found: ID does not exist" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.022107 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"} err="failed to get container status \"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124\": rpc error: code = NotFound desc = could not find container \"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124\": container with ID starting with 4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124 not found: ID does not exist" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.022137 4593 scope.go:117] "RemoveContainer" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030" Jan 29 12:24:39 crc kubenswrapper[4593]: E0129 12:24:39.033416 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030\": container with ID starting with 86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030 not found: ID does not exist" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.033465 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"} err="failed to get container status \"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030\": rpc error: code = NotFound desc = could not find container \"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030\": container with ID starting with 86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030 not found: ID does not exist" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.033496 4593 scope.go:117] "RemoveContainer" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff" Jan 29 12:24:39 crc kubenswrapper[4593]: E0129 12:24:39.036121 4593 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff\": container with ID starting with 4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff not found: ID does not exist" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.036178 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"} err="failed to get container status \"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff\": rpc error: code = NotFound desc = could not find container \"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff\": container with ID starting with 4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff not found: ID does not exist" Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.086980 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" path="/var/lib/kubelet/pods/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b/volumes" Jan 29 12:25:24 crc kubenswrapper[4593]: I0129 12:25:24.413269 4593 generic.go:334] "Generic (PLEG): container finished" podID="21818d64-20a5-4483-8f13-919b612d1007" containerID="b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e" exitCode=0 Jan 29 12:25:24 crc kubenswrapper[4593]: I0129 12:25:24.413483 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" event={"ID":"21818d64-20a5-4483-8f13-919b612d1007","Type":"ContainerDied","Data":"b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e"} Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.524842 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.560792 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"] Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.571783 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"] Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.647874 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"21818d64-20a5-4483-8f13-919b612d1007\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.648199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"21818d64-20a5-4483-8f13-919b612d1007\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.649401 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host" (OuterVolumeSpecName: "host") pod "21818d64-20a5-4483-8f13-919b612d1007" (UID: "21818d64-20a5-4483-8f13-919b612d1007"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.654852 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w" (OuterVolumeSpecName: "kube-api-access-db66w") pod "21818d64-20a5-4483-8f13-919b612d1007" (UID: "21818d64-20a5-4483-8f13-919b612d1007"). InnerVolumeSpecName "kube-api-access-db66w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.761163 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.761209 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.431156 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.431252 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854229 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-n8b5q"] Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854709 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-utilities" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854734 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-utilities" Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854743 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854749 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server" Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854766 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-content" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854772 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-content" Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854792 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21818d64-20a5-4483-8f13-919b612d1007" containerName="container-00" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854797 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="21818d64-20a5-4483-8f13-919b612d1007" containerName="container-00" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.855014 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="21818d64-20a5-4483-8f13-919b612d1007" containerName="container-00" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.855031 4593 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.855893 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.858343 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dw4s4"/"default-dockercfg-gg8rn" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.984123 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.984205 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.085748 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.085809 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.086143 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21818d64-20a5-4483-8f13-919b612d1007" path="/var/lib/kubelet/pods/21818d64-20a5-4483-8f13-919b612d1007/volumes" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.086349 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.109396 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.178503 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.460550 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" event={"ID":"16cd7214-5ee4-4072-a42a-9a51b9deea30","Type":"ContainerStarted","Data":"c863eb19aa45cab50d257db50f8ac6163ff5b0bbdf2c06af4d6b0e94e85d8801"} Jan 29 12:25:28 crc kubenswrapper[4593]: I0129 12:25:28.470709 4593 generic.go:334] "Generic (PLEG): container finished" podID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerID="1c377ca355fa720f0d286a362dd30108927c61a24acc46c9847397398d91107e" exitCode=0 Jan 29 12:25:28 crc kubenswrapper[4593]: I0129 12:25:28.470807 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" event={"ID":"16cd7214-5ee4-4072-a42a-9a51b9deea30","Type":"ContainerDied","Data":"1c377ca355fa720f0d286a362dd30108927c61a24acc46c9847397398d91107e"} Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.599044 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.743808 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"16cd7214-5ee4-4072-a42a-9a51b9deea30\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.744197 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"16cd7214-5ee4-4072-a42a-9a51b9deea30\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.743936 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host" (OuterVolumeSpecName: "host") pod "16cd7214-5ee4-4072-a42a-9a51b9deea30" (UID: "16cd7214-5ee4-4072-a42a-9a51b9deea30"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.761319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9" (OuterVolumeSpecName: "kube-api-access-wfrz9") pod "16cd7214-5ee4-4072-a42a-9a51b9deea30" (UID: "16cd7214-5ee4-4072-a42a-9a51b9deea30"). InnerVolumeSpecName "kube-api-access-wfrz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.849121 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.849378 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.221239 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-n8b5q"] Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.233255 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-n8b5q"] Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.493576 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c863eb19aa45cab50d257db50f8ac6163ff5b0bbdf2c06af4d6b0e94e85d8801" Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.494394 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.088017 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" path="/var/lib/kubelet/pods/16cd7214-5ee4-4072-a42a-9a51b9deea30/volumes" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.495862 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-cnspd"] Jan 29 12:25:31 crc kubenswrapper[4593]: E0129 12:25:31.496797 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerName="container-00" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.496825 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerName="container-00" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.497231 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerName="container-00" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.498729 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.501552 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dw4s4"/"default-dockercfg-gg8rn" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.684382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.684469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.786567 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.786692 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.786827 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.810933 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.819761 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: W0129 12:25:31.877499 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1325199a_5a2b_4b86_90a2_cbac24cc029c.slice/crio-30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7 WatchSource:0}: Error finding container 30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7: Status 404 returned error can't find the container with id 30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7 Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.509769 4593 generic.go:334] "Generic (PLEG): container finished" podID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerID="29677e210c78aebc6aa79ae1c919cd251d1bef19cd76388c6269f96a8c5b559f" exitCode=0 Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.510113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" event={"ID":"1325199a-5a2b-4b86-90a2-cbac24cc029c","Type":"ContainerDied","Data":"29677e210c78aebc6aa79ae1c919cd251d1bef19cd76388c6269f96a8c5b559f"} Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.510160 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" event={"ID":"1325199a-5a2b-4b86-90a2-cbac24cc029c","Type":"ContainerStarted","Data":"30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7"} Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.557026 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-cnspd"] Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.566040 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-cnspd"] Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.667236 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.835850 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"1325199a-5a2b-4b86-90a2-cbac24cc029c\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.835928 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"1325199a-5a2b-4b86-90a2-cbac24cc029c\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.835984 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host" (OuterVolumeSpecName: "host") pod "1325199a-5a2b-4b86-90a2-cbac24cc029c" (UID: "1325199a-5a2b-4b86-90a2-cbac24cc029c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.842828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m" (OuterVolumeSpecName: "kube-api-access-ms74m") pod "1325199a-5a2b-4b86-90a2-cbac24cc029c" (UID: "1325199a-5a2b-4b86-90a2-cbac24cc029c"). 
InnerVolumeSpecName "kube-api-access-ms74m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.937829 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.938125 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:34 crc kubenswrapper[4593]: I0129 12:25:34.536179 4593 scope.go:117] "RemoveContainer" containerID="29677e210c78aebc6aa79ae1c919cd251d1bef19cd76388c6269f96a8c5b559f" Jan 29 12:25:34 crc kubenswrapper[4593]: I0129 12:25:34.536355 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:35 crc kubenswrapper[4593]: I0129 12:25:35.086163 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" path="/var/lib/kubelet/pods/1325199a-5a2b-4b86-90a2-cbac24cc029c/volumes" Jan 29 12:26:03 crc kubenswrapper[4593]: I0129 12:26:03.947372 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:26:03 crc kubenswrapper[4593]: I0129 12:26:03.948071 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:26:30 crc kubenswrapper[4593]: I0129 12:26:30.665504 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api/0.log" Jan 29 12:26:30 crc kubenswrapper[4593]: I0129 12:26:30.779973 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api-log/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.574119 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.592264 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.639278 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener-log/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.853498 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker-log/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.928983 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz_e4241343-d4f5-4690-972e-55f054a93f30/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.139503 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-central-agent/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.168245 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/proxy-httpd/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.196521 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-notification-agent/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.242019 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/sg-core/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.472683 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.497466 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api-log/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.747199 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/cinder-scheduler/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.784435 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/probe/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.502147 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5_83fa3cd4-ce6a-44bb-b652-c783504941f9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.511706 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-27mbg_80d7dd41-691a-4411-97c2-91245d43b8ea/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.711942 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.946421 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.946808 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.019672 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-g462j_fee0ef55-8edb-456c-9344-98a3b34d3aab/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.054110 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.210851 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/dnsmasq-dns/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.353890 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-log/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.410614 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-httpd/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.599026 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-httpd/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.671937 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-log/0.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.008881 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/2.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.046618 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/1.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.504271 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x2n68_0418390b-7622-490c-ad95-ec5eac075440/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.507592 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p4f88_62d982c9-eb7a-4d9d-9cdd-2248c63b06fb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.811574 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon-log/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.018306 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494801-8jgxn_f7d47080-9737-4b86-9e40-a6c6bf7f1709/keystone-cron/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.108199 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6d0c0ba2-e8ed-4361-8aff-e71714a1617c/kube-state-metrics/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.370984 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f96568f6f-lfzv9_e2e767a2-2e4c-4a41-995f-1f0ca9248d1a/keystone-api/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.459256 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jt98j_1f7fe168-4498-4002-9233-d6c2d9f115fb/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:37 crc kubenswrapper[4593]: I0129 12:26:37.106315 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct_4c7cff3f-040a-4499-825c-3cccd015326a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:37 crc kubenswrapper[4593]: I0129 12:26:37.271984 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-httpd/0.log" Jan 29 12:26:37 crc kubenswrapper[4593]: I0129 12:26:37.642268 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-api/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.246584 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f/nova-cell0-conductor-conductor/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.410266 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bee10dce-c68f-47f4-84e0-623f276964d8/nova-cell1-conductor-conductor/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.865428 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0b25e9a9-4f12-4b7f-9001-74b6c3feb118/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.881949 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-log/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.124372 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rtfdg_f45f3aca-42e1-4105-b843-f5288550ce8c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.300332 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-log/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.391861 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-api/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.783293 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.036405 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.106986 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/galera/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.255435 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4eff0b9f-e2c4-4ae0-9b42-585f9141d740/nova-scheduler-scheduler/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.609832 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.877367 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/galera/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.926533 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.073309 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_220bdfcb-98c4-4c78-8d95-ea64edfaf1ab/openstackclient/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.383975 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cc9qq_df5842a4-132b-4c71-a970-efe4f00a3827/ovn-controller/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.471827 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g6lk4_9299d646-8191-4da6-a2d1-d5a8c6492e91/openstack-network-exporter/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.506443 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-metadata/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.789943 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.483685 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.560558 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.663419 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dc6f5a6c-3bf0-4f78-89f3-1e2683a37958/memcached/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.823956 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/openstack-network-exporter/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.858952 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/ovn-northd/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.987384 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/openstack-network-exporter/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.214363 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/openstack-network-exporter/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.731445 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/ovsdbserver-nb/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.731509 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/ovsdbserver-sb/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.732023 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovs-vswitchd/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.807848 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ftxjl_80db2d7c-94e6-418b-a0b4-2b4064356e4b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.968975 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-api/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.006530 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.235564 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/rabbitmq/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.250404 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-log/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.278932 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.360756 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.544950 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.679642 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jps44_9a263e61-6654-4030-bd96-c1baa9314111/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.682033 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/rabbitmq/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.867162 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7tzj5_ce80c16f-5109-46b9-9438-4f05a4132175/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.893617 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb_c3e4e3e3-1994-40a5-bab8-d84db2f44ddb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.957701 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lz46t_b1f286ec-6f85-44c4-94f5-f66bc21c2a64/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.129190 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cfk97_c22e1d76-6585-46e2-9c31-5c002e021882/ssh-known-hosts-edpm-deployment/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.390377 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-httpd/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.413731 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jbnzf_4d1e7e96-e120-43f1-bff0-ea3d624e621b/swift-ring-rebalance/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.454236 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-server/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.657621 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-auditor/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.686478 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-reaper/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.718402 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-replicator/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.783307 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-auditor/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.978639 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-server/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.989777 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-auditor/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.057332 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-server/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.094836 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-updater/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.098960 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-replicator/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.240681 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-expirer/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.279884 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-server/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.301590 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-replicator/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.305477 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-updater/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.344092 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/rsync/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.498628 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/swift-recon-cron/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.583405 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz_ee0ea7fe-3ea4-4944-8101-b03f1566882f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.615005 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d5ea9892-a149-4cfe-bb9c-ef636eacd125/tempest-tests-tempest-tests-runner/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.763913 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be3a2ae9-6f0e-459e-bd91-10a92871767c/test-operator-logs-container/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.848171 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p_0f5fb9be-3781-4b9a-96d8-705593907345/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.945834 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.946479 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.946533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.947352 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.947420 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" gracePeriod=600 Jan 29 12:27:04 crc kubenswrapper[4593]: E0129 12:27:04.291491 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.413363 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" exitCode=0 Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.413410 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"} Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.413459 4593 scope.go:117] "RemoveContainer" containerID="0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27" Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.414244 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:04 crc kubenswrapper[4593]: E0129 12:27:04.414476 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:16 crc kubenswrapper[4593]: I0129 12:27:16.074623 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:16 crc kubenswrapper[4593]: E0129 12:27:16.075437 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.466326 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.746268 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.781176 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.787419 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.066096 
4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.072199 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/extract/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.107774 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.415208 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-7ns7q_c5e6d3a8-d6d9-4445-9708-28b88928333e/manager/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.462061 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-7hmqc_e35e9127-0337-4860-b938-bb477a408f1e/manager/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.612524 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-xw2pz_734187ee-1e17-4cdc-b3bb-cfbd6e424793/manager/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.868919 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2ml7m_499923d8-4593-4225-bc4c-6166003a0bb3/manager/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.919948 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-xqcrc_50471b23-1d0d-4bd9-a66f-a89b3a39a612/manager/0.log" Jan 29 12:27:20 crc kubenswrapper[4593]: I0129 12:27:20.130392 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-98l2v_50a8381e-e59b-4400-9209-c33ef4f99c23/manager/0.log" Jan 29 12:27:20 crc kubenswrapper[4593]: I0129 12:27:20.465681 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6zkvt_c2cda883-37e6-4c21-b320-4962ffdc98b3/manager/0.log" Jan 29 12:27:20 crc kubenswrapper[4593]: I0129 12:27:20.500411 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-t584q_812ebcfb-766d-4a1b-aaaa-2dd5a96ce047/manager/0.log" Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.070180 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-xf5fn_cdb96936-cd34-44fd-94b5-5570688fdfe6/manager/0.log" Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.094056 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-c89cq_0881deda-c42a-48d8-9059-b7eaf66c0f9f/manager/0.log" Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.385474 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-zx6r8_62efedcb-a194-4692-8e84-a0da7777a400/manager/0.log" Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.403681 4593 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-qt87l_336c4e93-7d0b-4570-aafc-22e0f812db12/manager/0.log" Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.745238 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-8kf6p_40ab1792-0354-4c78-ac44-a217fbc610a9/manager/0.log" Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.757849 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9dbds_ba6fb45a-59ff-42ee-acb0-0ee43d001e1e/manager/0.log" Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.040052 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb_f6e2fc57-0cce-4f5a-bf3e-63efbfff1073/manager/0.log" Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.236288 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55ccc59995-d7xm7_c8e623f1-2830-4c78-b17a-6000f32978a3/operator/0.log" Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.626688 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sbxwt_0661b605-afb6-4341-9703-ea25a3afc19d/registry-server/0.log" Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.993011 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-885pn_9b88fe2c-a82a-4284-961a-8af3818815ec/manager/0.log" Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.171544 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-kttv8_2c7ec826-43f0-49f3-9d96-4330427e4ed9/manager/0.log" Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.324712 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d898fd894-sh94p_960bb326-dc22-43e5-bc4f-05c9ce9e26a9/manager/0.log" Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.342350 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tfkk2_2f32633b-0490-4885-9543-a140c807c742/operator/0.log" Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.734671 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-k4b7q_0e86fa54-1e41-4bb9-86c7-a0ea0d919270/manager/0.log" Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.911457 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-z4mp8_ea8d9bb8-bdec-453d-a308-28b962971254/manager/0.log" Jan 29 12:27:24 crc kubenswrapper[4593]: I0129 12:27:24.062798 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ltfr4_b45fb247-850e-40b4-b62e-8551d55efcba/manager/0.log" Jan 29 12:27:24 crc kubenswrapper[4593]: I0129 12:27:24.174112 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-zmssx_0259a320-8da9-48e5-8d73-25b09774d9c1/manager/0.log" Jan 29 12:27:28 crc kubenswrapper[4593]: I0129 12:27:28.075034 4593 scope.go:117] "RemoveContainer" 
containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:28 crc kubenswrapper[4593]: E0129 12:27:28.075582 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:41 crc kubenswrapper[4593]: I0129 12:27:41.076007 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:41 crc kubenswrapper[4593]: E0129 12:27:41.080030 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:47 crc kubenswrapper[4593]: I0129 12:27:47.983000 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pf5p2_9bce548b-2c64-4ac5-a797-979de4cf7656/control-plane-machine-set-operator/0.log" Jan 29 12:27:48 crc kubenswrapper[4593]: I0129 12:27:48.183146 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/machine-api-operator/0.log" Jan 29 12:27:48 crc kubenswrapper[4593]: I0129 12:27:48.238367 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/kube-rbac-proxy/0.log" Jan 29 12:27:52 crc kubenswrapper[4593]: I0129 12:27:52.075992 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:52 crc kubenswrapper[4593]: E0129 12:27:52.077322 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:01 crc kubenswrapper[4593]: I0129 12:28:01.682276 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qhfhj_59d387c2-4d0b-4d6c-a0d8-2230657bebd0/cert-manager-controller/0.log" Jan 29 12:28:02 crc kubenswrapper[4593]: I0129 12:28:02.246025 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lw7j7_79aa2cc5-a031-412d-a4c7-ba9251f84ed6/cert-manager-cainjector/0.log" Jan 29 12:28:02 crc kubenswrapper[4593]: I0129 12:28:02.426465 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t7s4r_e2b5756a-c46e-4e76-90bf-0a5c7c1dc759/cert-manager-webhook/0.log" Jan 29 12:28:05 crc kubenswrapper[4593]: I0129 12:28:05.090131 4593 scope.go:117] "RemoveContainer" 
containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:05 crc kubenswrapper[4593]: E0129 12:28:05.090952 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:15 crc kubenswrapper[4593]: I0129 12:28:15.823443 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nck62_2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2/nmstate-console-plugin/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.034604 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q2lbc_ea391d24-e32c-440b-b5c2-218920192125/nmstate-handler/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.277254 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/kube-rbac-proxy/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.298965 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/nmstate-metrics/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.432187 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xmhmc_b2e0c4ff-8a2b-474d-8414-a0026d61b11e/nmstate-operator/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.513449 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-47n46_72d4f068-dc20-44d0-aca6-c8f0992536e6/nmstate-webhook/0.log" Jan 29 12:28:19 crc kubenswrapper[4593]: I0129 12:28:19.079375 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:19 crc kubenswrapper[4593]: E0129 12:28:19.079992 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:34 crc kubenswrapper[4593]: I0129 12:28:34.075493 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:34 crc kubenswrapper[4593]: E0129 12:28:34.076350 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.352099 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:28:35 crc 
kubenswrapper[4593]: E0129 12:28:35.352716 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerName="container-00" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.352733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerName="container-00" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.353013 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerName="container-00" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.361045 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.426178 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.441063 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.441212 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.441270 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.542615 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.543012 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.543266 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.546033 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.546552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.564034 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.718691 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:36 crc kubenswrapper[4593]: I0129 12:28:36.239906 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:28:36 crc kubenswrapper[4593]: I0129 12:28:36.311035 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"f009cbeccec362360001e7cb5c502e81a1edd3147f1f8aade495c66564bbfd8c"} Jan 29 12:28:37 crc kubenswrapper[4593]: I0129 12:28:37.332859 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="326801ee869f568d18145038bfc3feeb923901fc80f9ebe2dd1bfa5dfa227fba" exitCode=0 Jan 29 12:28:37 crc kubenswrapper[4593]: I0129 12:28:37.333177 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"326801ee869f568d18145038bfc3feeb923901fc80f9ebe2dd1bfa5dfa227fba"} Jan 29 12:28:39 crc kubenswrapper[4593]: I0129 12:28:39.354436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8"} Jan 29 12:28:48 crc kubenswrapper[4593]: I0129 12:28:48.332965 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/kube-rbac-proxy/0.log" Jan 29 12:28:48 crc kubenswrapper[4593]: I0129 12:28:48.365485 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/controller/0.log" Jan 29 12:28:48 crc kubenswrapper[4593]: I0129 12:28:48.930137 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.075548 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:49 crc kubenswrapper[4593]: E0129 12:28:49.075864 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.107738 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.165568 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.190507 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.207398 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.375088 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.444140 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.479675 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.483774 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.676600 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.712121 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.722379 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.723842 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/controller/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.960039 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy-frr/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.990593 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.027379 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr-metrics/0.log" Jan 
29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.298941 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dj42h_45d808cf-80c4-4f7b-a172-76e4ecd9e37b/frr-k8s-webhook-server/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.399954 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/reloader/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.731622 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bf4d9f4bd-ll9bk_421156e9-d8d3-4112-bd58-d09c40a70a12/manager/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.837248 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fdc78c47c-w2tv4_c3381187-83f6-4877-8d72-3ed30f360a86/webhook-server/0.log" Jan 29 12:28:51 crc kubenswrapper[4593]: I0129 12:28:51.157039 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/kube-rbac-proxy/0.log" Jan 29 12:28:51 crc kubenswrapper[4593]: I0129 12:28:51.659375 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/speaker/0.log" Jan 29 12:28:51 crc kubenswrapper[4593]: I0129 12:28:51.760748 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr/0.log" Jan 29 12:28:52 crc kubenswrapper[4593]: I0129 12:28:52.468520 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8" exitCode=0 Jan 29 12:28:52 crc kubenswrapper[4593]: I0129 12:28:52.468568 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8"} Jan 29 12:28:54 crc kubenswrapper[4593]: I0129 12:28:54.487759 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278"} Jan 29 12:28:54 crc kubenswrapper[4593]: I0129 12:28:54.514744 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8n82" podStartSLOduration=3.120361058 podStartE2EDuration="19.51470227s" podCreationTimestamp="2026-01-29 12:28:35 +0000 UTC" firstStartedPulling="2026-01-29 12:28:37.33635985 +0000 UTC m=+5383.209394041" lastFinishedPulling="2026-01-29 12:28:53.730701052 +0000 UTC m=+5399.603735253" observedRunningTime="2026-01-29 12:28:54.512564942 +0000 UTC m=+5400.385599143" watchObservedRunningTime="2026-01-29 12:28:54.51470227 +0000 UTC m=+5400.387736471" Jan 29 12:28:55 crc kubenswrapper[4593]: I0129 12:28:55.720389 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:55 crc kubenswrapper[4593]: I0129 12:28:55.720451 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:56 crc kubenswrapper[4593]: I0129 12:28:56.775546 4593 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:28:56 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:28:56 crc kubenswrapper[4593]: > Jan 29 12:29:03 crc kubenswrapper[4593]: I0129 12:29:03.075845 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:03 crc kubenswrapper[4593]: E0129 12:29:03.077921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.691312 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.767130 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:06 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:06 crc kubenswrapper[4593]: > Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.981621 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.988970 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.040420 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.212558 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.291075 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/extract/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.293038 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.454848 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:29:07 crc 
kubenswrapper[4593]: I0129 12:29:07.681710 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.705297 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.772314 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.024593 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.025246 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.074522 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/extract/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.254597 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.490869 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.600256 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.662077 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.825087 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.893559 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.237942 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.416915 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.430625 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.502515 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/registry-server/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.506579 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.764151 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.794243 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.199012 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s2rlp_7a59fe58-c900-46ea-8ff2-8a7f49210dc3/marketplace-operator/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.345980 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.472149 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.542145 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.549099 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/registry-server/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.641563 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.848076 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.892219 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.051036 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/registry-server/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.159261 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-utilities/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.732088 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-utilities/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.900157 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-content/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.960462 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.188736 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.229059 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.257811 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/registry-server/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.383079 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.536140 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.572286 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.603590 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.844924 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.884611 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:29:13 crc kubenswrapper[4593]: I0129 12:29:13.484583 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/registry-server/0.log" Jan 29 12:29:16 crc kubenswrapper[4593]: I0129 12:29:16.767195 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:16 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:16 crc kubenswrapper[4593]: > Jan 29 12:29:18 crc kubenswrapper[4593]: I0129 12:29:18.075219 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:18 crc kubenswrapper[4593]: 
E0129 12:29:18.076430 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:26 crc kubenswrapper[4593]: I0129 12:29:26.772844 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:26 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:26 crc kubenswrapper[4593]: > Jan 29 12:29:33 crc kubenswrapper[4593]: I0129 12:29:33.074646 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:33 crc kubenswrapper[4593]: E0129 12:29:33.075276 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:36 crc kubenswrapper[4593]: I0129 12:29:36.788343 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:36 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:36 crc kubenswrapper[4593]: > Jan 29 12:29:46 crc kubenswrapper[4593]: I0129 12:29:46.777795 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:46 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:46 crc kubenswrapper[4593]: > Jan 29 12:29:48 crc kubenswrapper[4593]: I0129 12:29:48.075779 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:48 crc kubenswrapper[4593]: E0129 12:29:48.076124 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:56 crc kubenswrapper[4593]: I0129 12:29:56.779426 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:56 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:56 crc kubenswrapper[4593]: > Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.177906 4593 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h"] Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.179896 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.185320 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.185611 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.194570 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h"] Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.245428 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.245594 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.245693 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.347468 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.347601 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.347650 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: 
I0129 12:30:00.348521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.356775 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.369160 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.508487 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:01 crc kubenswrapper[4593]: I0129 12:30:01.000532 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h"] Jan 29 12:30:01 crc kubenswrapper[4593]: I0129 12:30:01.199567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerStarted","Data":"ec4125f9487aabe08bbe0d53076ff552deb919191e3b90e2b41387a971ad58b7"} Jan 29 12:30:02 crc kubenswrapper[4593]: I0129 12:30:02.075614 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:02 crc kubenswrapper[4593]: E0129 12:30:02.076182 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:02 crc kubenswrapper[4593]: I0129 12:30:02.210136 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerStarted","Data":"4aaea735207498aaa0a35ad4ef072f20cf4b60e5b44ae473861a8ce70920dc7d"} Jan 29 12:30:02 crc kubenswrapper[4593]: I0129 12:30:02.242160 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" podStartSLOduration=2.24213114 podStartE2EDuration="2.24213114s" podCreationTimestamp="2026-01-29 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:30:02.231160385 +0000 UTC m=+5468.104194586" watchObservedRunningTime="2026-01-29 12:30:02.24213114 +0000 
UTC m=+5468.115165321" Jan 29 12:30:03 crc kubenswrapper[4593]: I0129 12:30:03.221184 4593 generic.go:334] "Generic (PLEG): container finished" podID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerID="4aaea735207498aaa0a35ad4ef072f20cf4b60e5b44ae473861a8ce70920dc7d" exitCode=0 Jan 29 12:30:03 crc kubenswrapper[4593]: I0129 12:30:03.221220 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerDied","Data":"4aaea735207498aaa0a35ad4ef072f20cf4b60e5b44ae473861a8ce70920dc7d"} Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.611448 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.749287 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.749389 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.749531 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.750085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume" (OuterVolumeSpecName: "config-volume") pod "04c1b6ee-aa78-4334-b212-4e15c4aceda7" (UID: "04c1b6ee-aa78-4334-b212-4e15c4aceda7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.750302 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.755373 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04c1b6ee-aa78-4334-b212-4e15c4aceda7" (UID: "04c1b6ee-aa78-4334-b212-4e15c4aceda7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.756450 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9" (OuterVolumeSpecName: "kube-api-access-x5gt9") pod "04c1b6ee-aa78-4334-b212-4e15c4aceda7" (UID: "04c1b6ee-aa78-4334-b212-4e15c4aceda7"). InnerVolumeSpecName "kube-api-access-x5gt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.851508 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.851549 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.285254 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerDied","Data":"ec4125f9487aabe08bbe0d53076ff552deb919191e3b90e2b41387a971ad58b7"} Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.285319 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec4125f9487aabe08bbe0d53076ff552deb919191e3b90e2b41387a971ad58b7" Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.285372 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.341897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.351385 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 12:30:06 crc kubenswrapper[4593]: I0129 12:30:06.800559 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:06 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:06 crc kubenswrapper[4593]: > Jan 29 12:30:07 crc kubenswrapper[4593]: I0129 12:30:07.086248 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" path="/var/lib/kubelet/pods/dc4e2861-f7e0-40bb-bb77-b0fdd3498554/volumes" Jan 29 12:30:16 crc kubenswrapper[4593]: I0129 12:30:16.777821 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:16 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:16 crc kubenswrapper[4593]: > Jan 29 12:30:17 crc kubenswrapper[4593]: I0129 12:30:17.075510 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:17 crc kubenswrapper[4593]: E0129 12:30:17.076179 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:17 crc 
kubenswrapper[4593]: I0129 12:30:17.680747 4593 scope.go:117] "RemoveContainer" containerID="774b5de0fbc462ffcb1b94ee57144a8198c30add9d0ae3a9eee99f2a26a14b82" Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.790761 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:26 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:26 crc kubenswrapper[4593]: > Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.791286 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.792050 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278"} pod="openshift-marketplace/redhat-operators-t8n82" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.792089 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" containerID="cri-o://00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278" gracePeriod=30 Jan 29 12:30:28 crc kubenswrapper[4593]: I0129 12:30:28.088157 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:30:28 crc kubenswrapper[4593]: I0129 12:30:28.507034 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278" exitCode=0 Jan 29 12:30:28 crc kubenswrapper[4593]: I0129 12:30:28.507085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278"} Jan 29 12:30:29 crc kubenswrapper[4593]: I0129 12:30:29.074889 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:29 crc kubenswrapper[4593]: E0129 12:30:29.075591 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:29 crc kubenswrapper[4593]: I0129 12:30:29.520705 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908"} Jan 29 12:30:35 crc kubenswrapper[4593]: I0129 12:30:35.720215 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:35 crc kubenswrapper[4593]: I0129 12:30:35.722302 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:36 crc kubenswrapper[4593]: I0129 12:30:36.778666 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:36 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:36 crc kubenswrapper[4593]: > Jan 29 12:30:42 crc kubenswrapper[4593]: I0129 12:30:42.076513 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:42 crc kubenswrapper[4593]: E0129 12:30:42.077175 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:45 crc kubenswrapper[4593]: I0129 12:30:45.790114 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:45 crc kubenswrapper[4593]: I0129 12:30:45.847244 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:46 crc kubenswrapper[4593]: I0129 12:30:46.034726 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:30:47 crc kubenswrapper[4593]: I0129 12:30:47.699211 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" containerID="cri-o://842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908" gracePeriod=2 Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.753334 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908" exitCode=0 Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.753725 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908"} Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.753767 4593 scope.go:117] "RemoveContainer" containerID="00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.928408 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.042292 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"5b4febee-8f26-4e76-a4b6-09da10523b68\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.042475 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"5b4febee-8f26-4e76-a4b6-09da10523b68\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.042520 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"5b4febee-8f26-4e76-a4b6-09da10523b68\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.051401 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities" (OuterVolumeSpecName: "utilities") pod "5b4febee-8f26-4e76-a4b6-09da10523b68" (UID: "5b4febee-8f26-4e76-a4b6-09da10523b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.051989 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc" (OuterVolumeSpecName: "kube-api-access-lm2nc") pod "5b4febee-8f26-4e76-a4b6-09da10523b68" (UID: "5b4febee-8f26-4e76-a4b6-09da10523b68"). InnerVolumeSpecName "kube-api-access-lm2nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.145646 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.145671 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.194389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b4febee-8f26-4e76-a4b6-09da10523b68" (UID: "5b4febee-8f26-4e76-a4b6-09da10523b68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.247391 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.774032 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"f009cbeccec362360001e7cb5c502e81a1edd3147f1f8aade495c66564bbfd8c"} Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.774085 4593 scope.go:117] "RemoveContainer" containerID="842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.774099 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.796558 4593 scope.go:117] "RemoveContainer" containerID="cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.841004 4593 scope.go:117] "RemoveContainer" containerID="326801ee869f568d18145038bfc3feeb923901fc80f9ebe2dd1bfa5dfa227fba" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.841153 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.850977 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:30:51 crc kubenswrapper[4593]: I0129 12:30:51.085832 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" path="/var/lib/kubelet/pods/5b4febee-8f26-4e76-a4b6-09da10523b68/volumes" Jan 29 12:30:55 crc kubenswrapper[4593]: I0129 12:30:55.082981 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:55 crc kubenswrapper[4593]: E0129 12:30:55.084019 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:09 crc kubenswrapper[4593]: I0129 12:31:09.074977 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:31:09 crc kubenswrapper[4593]: E0129 12:31:09.075731 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:17 crc kubenswrapper[4593]: I0129 12:31:17.765346 4593 scope.go:117] "RemoveContainer" containerID="b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e" Jan 29 12:31:20 crc kubenswrapper[4593]: I0129 12:31:20.075560 
4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:31:20 crc kubenswrapper[4593]: E0129 12:31:20.076221 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:33 crc kubenswrapper[4593]: I0129 12:31:33.080555 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:31:33 crc kubenswrapper[4593]: E0129 12:31:33.081442 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:47 crc kubenswrapper[4593]: I0129 12:31:47.083987 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:31:47 crc kubenswrapper[4593]: E0129 12:31:47.085089 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.078881 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079869 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerName="collect-profiles" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.079902 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerName="collect-profiles" Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079950 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.079958 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079970 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-utilities" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.079978 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-utilities" Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079997 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-content" Jan 29 12:31:50 crc 
kubenswrapper[4593]: I0129 12:31:50.080004 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-content" Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.080024 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080032 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080306 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerName="collect-profiles" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080618 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080651 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.082511 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.094509 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.206461 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.206597 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.206663 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.308867 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309103 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " 
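
Editor's note on the volume lines above: the reconciler first verifies the controller-attached volumes for certified-operators-5l2gk and then mounts them; "utilities" and "catalog-content" are emptyDir volumes and "kube-api-access-6zjwt" is a projected service-account token volume. Below is a minimal sketch of that volume stanza using k8s.io/api types. The volume names come from the log; the projected-source layout (bound token plus kube-root-ca.crt) is the usual generated default and is assumed here, not read from the pod.

```go
// Sketch only: the three volumes attached/mounted in the reconciler lines above,
// expressed with k8s.io/api types. Names are from the log; the projected sources
// and token lifetime are assumptions about the standard kube-api-access layout.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // typical bound-token lifetime for kube-api-access volumes (assumed)
	volumes := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{
			Name: "kube-api-access-6zjwt",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token", ExpirationSeconds: &expiry}},
						{ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
							Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
						}},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(volumes, "", "  ")
	fmt.Println(string(out))
}
```
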
pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309452 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309597 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.332922 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.421169 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.727057 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:31:51 crc kubenswrapper[4593]: I0129 12:31:51.331017 4593 generic.go:334] "Generic (PLEG): container finished" podID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" exitCode=0 Jan 29 12:31:51 crc kubenswrapper[4593]: I0129 12:31:51.331138 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb"} Jan 29 12:31:51 crc kubenswrapper[4593]: I0129 12:31:51.331335 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerStarted","Data":"bf4434e5b035dba180315d3cb2ea4eca8d32e33cde7fe6fc465316c9c9d37f6c"} Jan 29 12:31:53 crc kubenswrapper[4593]: I0129 12:31:53.382071 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerStarted","Data":"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651"} Jan 29 12:31:57 crc kubenswrapper[4593]: I0129 12:31:57.437557 4593 generic.go:334] "Generic (PLEG): container finished" podID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" exitCode=0 Jan 29 12:31:57 crc kubenswrapper[4593]: I0129 12:31:57.437669 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651"} Jan 29 12:31:58 crc kubenswrapper[4593]: I0129 12:31:58.076919 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:31:58 crc kubenswrapper[4593]: E0129 12:31:58.077157 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:59 crc kubenswrapper[4593]: I0129 12:31:59.461718 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerStarted","Data":"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25"} Jan 29 12:31:59 crc kubenswrapper[4593]: I0129 12:31:59.490942 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5l2gk" podStartSLOduration=2.166959243 podStartE2EDuration="9.490895669s" podCreationTimestamp="2026-01-29 12:31:50 +0000 UTC" firstStartedPulling="2026-01-29 12:31:51.332436722 +0000 UTC m=+5577.205470913" lastFinishedPulling="2026-01-29 12:31:58.656373148 +0000 UTC m=+5584.529407339" observedRunningTime="2026-01-29 12:31:59.484341541 +0000 UTC m=+5585.357375742" watchObservedRunningTime="2026-01-29 12:31:59.490895669 +0000 UTC m=+5585.363929870" Jan 29 12:32:00 crc kubenswrapper[4593]: I0129 12:32:00.421512 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:00 crc kubenswrapper[4593]: I0129 12:32:00.421675 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:01 crc kubenswrapper[4593]: I0129 12:32:01.467584 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5l2gk" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" probeResult="failure" output=< Jan 29 12:32:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:32:01 crc kubenswrapper[4593]: > Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.503060 4593 generic.go:334] "Generic (PLEG): container finished" podID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" exitCode=0 Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.503692 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerDied","Data":"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"} Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.504413 4593 scope.go:117] "RemoveContainer" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.756347 4593 log.go:25] "Finished parsing log file" 
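
Editor's note on the startup-probe failure at 12:32:01 above: the output "timeout: failed to connect service \":50051\" within 1s" is what a gRPC health check against the registry-server port reports when it cannot even establish a connection inside its 1-second deadline. A hedged Go equivalent of such a check follows; the address and deadline mirror the log, but the pod's actual probe binary is not shown there, so this is illustrative only.

```go
// Minimal sketch of the kind of check behind the ":50051 within 1s" failure above:
// dial the registry-server gRPC port and call the standard health service with a
// 1-second deadline. Not the exact probe executable used by the pod.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock()) // give up within the 1s deadline if nothing is accepting connections
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		fmt.Fprintf(os.Stderr, "service not serving: %v %v\n", resp.GetStatus(), err)
		os.Exit(1)
	}
	fmt.Println("status: SERVING")
}
```
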
path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/gather/0.log" Jan 29 12:32:10 crc kubenswrapper[4593]: I0129 12:32:10.478578 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:10 crc kubenswrapper[4593]: I0129 12:32:10.529602 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:10 crc kubenswrapper[4593]: I0129 12:32:10.742997 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:32:11 crc kubenswrapper[4593]: I0129 12:32:11.578319 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5l2gk" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" containerID="cri-o://20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" gracePeriod=2 Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.054750 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.074939 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.147380 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.147896 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.148134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.149086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities" (OuterVolumeSpecName: "utilities") pod "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" (UID: "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.150835 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.155871 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt" (OuterVolumeSpecName: "kube-api-access-6zjwt") pod "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" (UID: "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df"). 
InnerVolumeSpecName "kube-api-access-6zjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.212316 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" (UID: "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.252735 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.252773 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.593034 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f"} Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603496 4593 generic.go:334] "Generic (PLEG): container finished" podID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" exitCode=0 Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603555 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25"} Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603594 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"bf4434e5b035dba180315d3cb2ea4eca8d32e33cde7fe6fc465316c9c9d37f6c"} Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603616 4593 scope.go:117] "RemoveContainer" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.604033 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.667749 4593 scope.go:117] "RemoveContainer" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.682735 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.701506 4593 scope.go:117] "RemoveContainer" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.733044 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.762280 4593 scope.go:117] "RemoveContainer" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" Jan 29 12:32:12 crc kubenswrapper[4593]: E0129 12:32:12.763029 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25\": container with ID starting with 20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25 not found: ID does not exist" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763070 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25"} err="failed to get container status \"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25\": rpc error: code = NotFound desc = could not find container \"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25\": container with ID starting with 20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25 not found: ID does not exist" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763098 4593 scope.go:117] "RemoveContainer" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" Jan 29 12:32:12 crc kubenswrapper[4593]: E0129 12:32:12.763390 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651\": container with ID starting with 81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651 not found: ID does not exist" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763412 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651"} err="failed to get container status \"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651\": rpc error: code = NotFound desc = could not find container \"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651\": container with ID starting with 81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651 not found: ID does not exist" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763426 4593 scope.go:117] "RemoveContainer" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" Jan 29 12:32:12 crc kubenswrapper[4593]: E0129 12:32:12.763960 4593 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb\": container with ID starting with b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb not found: ID does not exist" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763983 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb"} err="failed to get container status \"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb\": rpc error: code = NotFound desc = could not find container \"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb\": container with ID starting with b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb not found: ID does not exist" Jan 29 12:32:13 crc kubenswrapper[4593]: I0129 12:32:13.087881 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" path="/var/lib/kubelet/pods/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df/volumes" Jan 29 12:32:17 crc kubenswrapper[4593]: I0129 12:32:17.862983 4593 scope.go:117] "RemoveContainer" containerID="1c377ca355fa720f0d286a362dd30108927c61a24acc46c9847397398d91107e" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.156583 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.156962 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" containerID="cri-o://1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" gracePeriod=2 Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.165156 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.591256 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/copy/0.log" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.592375 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660131 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/copy/0.log" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660666 4593 generic.go:334] "Generic (PLEG): container finished" podID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" exitCode=143 Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660769 4593 scope.go:117] "RemoveContainer" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660776 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.678761 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"65f07111-44a8-402c-887e-fb65ab51a2ba\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.679148 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"65f07111-44a8-402c-887e-fb65ab51a2ba\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.691168 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw" (OuterVolumeSpecName: "kube-api-access-mslvw") pod "65f07111-44a8-402c-887e-fb65ab51a2ba" (UID: "65f07111-44a8-402c-887e-fb65ab51a2ba"). InnerVolumeSpecName "kube-api-access-mslvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.701689 4593 scope.go:117] "RemoveContainer" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.769430 4593 scope.go:117] "RemoveContainer" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" Jan 29 12:32:18 crc kubenswrapper[4593]: E0129 12:32:18.771648 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a\": container with ID starting with 1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a not found: ID does not exist" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.771686 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a"} err="failed to get container status \"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a\": rpc error: code = NotFound desc = could not find container \"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a\": container with ID starting with 1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a not found: ID does not exist" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.771708 4593 scope.go:117] "RemoveContainer" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:18 crc kubenswrapper[4593]: E0129 12:32:18.772161 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee\": container with ID starting with de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee not found: ID does not exist" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.772295 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"} err="failed to get container status \"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee\": rpc error: code = NotFound desc = could not find container \"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee\": container with ID starting with de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee not found: ID does not exist" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.781697 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.920254 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "65f07111-44a8-402c-887e-fb65ab51a2ba" (UID: "65f07111-44a8-402c-887e-fb65ab51a2ba"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.985965 4593 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:19 crc kubenswrapper[4593]: I0129 12:32:19.087271 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" path="/var/lib/kubelet/pods/65f07111-44a8-402c-887e-fb65ab51a2ba/volumes" Jan 29 12:34:33 crc kubenswrapper[4593]: I0129 12:34:33.945691 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:34:33 crc kubenswrapper[4593]: I0129 12:34:33.946309 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.931915 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932828 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-content" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932841 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-content" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932855 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="gather" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932862 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="gather" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932873 4593 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932883 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932897 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-utilities" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932907 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-utilities" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932944 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932950 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.933143 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.933158 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.933176 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="gather" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.934462 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.961367 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.965269 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.965321 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.965376 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.066910 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"community-operators-c2lqd\" (UID: 
\"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067331 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067404 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067688 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.091959 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.258542 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.738407 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.939741 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.942024 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.950290 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.108012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.109159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.109413 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.211490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.212020 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.212237 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.213165 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.213168 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.239946 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.256107 4593 generic.go:334] "Generic (PLEG): container finished" podID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" exitCode=0 Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.256156 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5"} Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.256185 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerStarted","Data":"925fae481b629ccb1893d79864a8245208c10343beb67fe181c165267988eb8c"} Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.351253 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.864261 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:34:48 crc kubenswrapper[4593]: W0129 12:34:48.865689 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfd19db0_a9c1_4aa7_a665_957e97ca991e.slice/crio-37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc WatchSource:0}: Error finding container 37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc: Status 404 returned error can't find the container with id 37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc Jan 29 12:34:49 crc kubenswrapper[4593]: I0129 12:34:49.276155 4593 generic.go:334] "Generic (PLEG): container finished" podID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" exitCode=0 Jan 29 12:34:49 crc kubenswrapper[4593]: I0129 12:34:49.276229 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e"} Jan 29 12:34:49 crc kubenswrapper[4593]: I0129 12:34:49.276262 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerStarted","Data":"37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc"} Jan 29 12:34:50 crc kubenswrapper[4593]: I0129 12:34:50.287071 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerStarted","Data":"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70"} Jan 29 12:34:53 crc kubenswrapper[4593]: I0129 12:34:53.315665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" 
event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerStarted","Data":"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e"} Jan 29 12:34:53 crc kubenswrapper[4593]: I0129 12:34:53.318702 4593 generic.go:334] "Generic (PLEG): container finished" podID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" exitCode=0 Jan 29 12:34:53 crc kubenswrapper[4593]: I0129 12:34:53.318759 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70"} Jan 29 12:34:54 crc kubenswrapper[4593]: I0129 12:34:54.331278 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerStarted","Data":"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b"} Jan 29 12:34:54 crc kubenswrapper[4593]: I0129 12:34:54.372502 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2lqd" podStartSLOduration=2.748048947 podStartE2EDuration="8.37247321s" podCreationTimestamp="2026-01-29 12:34:46 +0000 UTC" firstStartedPulling="2026-01-29 12:34:48.258598419 +0000 UTC m=+5754.131632610" lastFinishedPulling="2026-01-29 12:34:53.883022672 +0000 UTC m=+5759.756056873" observedRunningTime="2026-01-29 12:34:54.362791948 +0000 UTC m=+5760.235826159" watchObservedRunningTime="2026-01-29 12:34:54.37247321 +0000 UTC m=+5760.245507401" Jan 29 12:34:55 crc kubenswrapper[4593]: I0129 12:34:55.348336 4593 generic.go:334] "Generic (PLEG): container finished" podID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" exitCode=0 Jan 29 12:34:55 crc kubenswrapper[4593]: I0129 12:34:55.351109 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e"} Jan 29 12:34:56 crc kubenswrapper[4593]: I0129 12:34:56.362122 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerStarted","Data":"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5"} Jan 29 12:34:56 crc kubenswrapper[4593]: I0129 12:34:56.385130 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkpl9" podStartSLOduration=2.722455474 podStartE2EDuration="9.385111384s" podCreationTimestamp="2026-01-29 12:34:47 +0000 UTC" firstStartedPulling="2026-01-29 12:34:49.278898804 +0000 UTC m=+5755.151932995" lastFinishedPulling="2026-01-29 12:34:55.941554704 +0000 UTC m=+5761.814588905" observedRunningTime="2026-01-29 12:34:56.38127013 +0000 UTC m=+5762.254304331" watchObservedRunningTime="2026-01-29 12:34:56.385111384 +0000 UTC m=+5762.258145575" Jan 29 12:34:57 crc kubenswrapper[4593]: I0129 12:34:57.258986 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:57 crc kubenswrapper[4593]: I0129 12:34:57.259769 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:57 crc kubenswrapper[4593]: I0129 12:34:57.304470 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:58 crc kubenswrapper[4593]: I0129 12:34:58.352016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:58 crc kubenswrapper[4593]: I0129 12:34:58.353900 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:58 crc kubenswrapper[4593]: I0129 12:34:58.399237 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:03 crc kubenswrapper[4593]: I0129 12:35:03.946029 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:35:03 crc kubenswrapper[4593]: I0129 12:35:03.946552 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:35:07 crc kubenswrapper[4593]: I0129 12:35:07.324170 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:35:07 crc kubenswrapper[4593]: I0129 12:35:07.382689 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:35:07 crc kubenswrapper[4593]: I0129 12:35:07.465592 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2lqd" podUID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerName="registry-server" containerID="cri-o://af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" gracePeriod=2 Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.257357 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.400172 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.409704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"092caf89-afd5-4bc4-aa5b-afa0b8583122\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.411176 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"092caf89-afd5-4bc4-aa5b-afa0b8583122\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.411416 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"092caf89-afd5-4bc4-aa5b-afa0b8583122\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.411139 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities" (OuterVolumeSpecName: "utilities") pod "092caf89-afd5-4bc4-aa5b-afa0b8583122" (UID: "092caf89-afd5-4bc4-aa5b-afa0b8583122"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.423010 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7" (OuterVolumeSpecName: "kube-api-access-wrss7") pod "092caf89-afd5-4bc4-aa5b-afa0b8583122" (UID: "092caf89-afd5-4bc4-aa5b-afa0b8583122"). InnerVolumeSpecName "kube-api-access-wrss7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477500 4593 generic.go:334] "Generic (PLEG): container finished" podID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" exitCode=0 Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b"} Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477588 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"925fae481b629ccb1893d79864a8245208c10343beb67fe181c165267988eb8c"} Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477605 4593 scope.go:117] "RemoveContainer" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477813 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.480709 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "092caf89-afd5-4bc4-aa5b-afa0b8583122" (UID: "092caf89-afd5-4bc4-aa5b-afa0b8583122"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.502797 4593 scope.go:117] "RemoveContainer" containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.513141 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.513190 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.513206 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.527839 4593 scope.go:117] "RemoveContainer" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.587124 4593 scope.go:117] "RemoveContainer" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" Jan 29 12:35:08 crc kubenswrapper[4593]: E0129 12:35:08.588461 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b\": container with ID starting with af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b not found: ID does not exist" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.588536 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b"} err="failed to get container status \"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b\": rpc error: code = NotFound desc = could not find container \"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b\": container with ID starting with af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b not found: ID does not exist" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.588572 4593 scope.go:117] "RemoveContainer" containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" Jan 29 12:35:08 crc kubenswrapper[4593]: E0129 12:35:08.589532 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70\": container with ID starting with 7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70 not found: ID does not exist" 
containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.589596 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70"} err="failed to get container status \"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70\": rpc error: code = NotFound desc = could not find container \"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70\": container with ID starting with 7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70 not found: ID does not exist" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.589659 4593 scope.go:117] "RemoveContainer" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" Jan 29 12:35:08 crc kubenswrapper[4593]: E0129 12:35:08.590069 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5\": container with ID starting with 0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5 not found: ID does not exist" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.590118 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5"} err="failed to get container status \"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5\": rpc error: code = NotFound desc = could not find container \"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5\": container with ID starting with 0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5 not found: ID does not exist" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.812098 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.820176 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:35:09 crc kubenswrapper[4593]: I0129 12:35:09.091065 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092caf89-afd5-4bc4-aa5b-afa0b8583122" path="/var/lib/kubelet/pods/092caf89-afd5-4bc4-aa5b-afa0b8583122/volumes" Jan 29 12:35:10 crc kubenswrapper[4593]: I0129 12:35:10.769865 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:35:10 crc kubenswrapper[4593]: I0129 12:35:10.770571 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkpl9" podUID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerName="registry-server" containerID="cri-o://4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" gracePeriod=2 Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.296507 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.320485 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.320572 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.320815 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.321471 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities" (OuterVolumeSpecName: "utilities") pod "dfd19db0-a9c1-4aa7-a665-957e97ca991e" (UID: "dfd19db0-a9c1-4aa7-a665-957e97ca991e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.327205 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq" (OuterVolumeSpecName: "kube-api-access-nkjkq") pod "dfd19db0-a9c1-4aa7-a665-957e97ca991e" (UID: "dfd19db0-a9c1-4aa7-a665-957e97ca991e"). InnerVolumeSpecName "kube-api-access-nkjkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.367368 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfd19db0-a9c1-4aa7-a665-957e97ca991e" (UID: "dfd19db0-a9c1-4aa7-a665-957e97ca991e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.422822 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.422867 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.422879 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.511896 4593 generic.go:334] "Generic (PLEG): container finished" podID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" exitCode=0 Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.511953 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5"} Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.511996 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.512035 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc"} Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.512061 4593 scope.go:117] "RemoveContainer" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.541899 4593 scope.go:117] "RemoveContainer" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.557871 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.594386 4593 scope.go:117] "RemoveContainer" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.605008 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.622979 4593 scope.go:117] "RemoveContainer" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" Jan 29 12:35:11 crc kubenswrapper[4593]: E0129 12:35:11.623550 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5\": container with ID starting with 4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5 not found: ID does not exist" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623585 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5"} err="failed to get container status \"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5\": rpc error: code = NotFound desc = could not find container \"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5\": container with ID starting with 4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5 not found: ID does not exist" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623608 4593 scope.go:117] "RemoveContainer" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" Jan 29 12:35:11 crc kubenswrapper[4593]: E0129 12:35:11.623881 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e\": container with ID starting with 214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e not found: ID does not exist" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623917 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e"} err="failed to get container status \"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e\": rpc error: code = NotFound desc = could not find container \"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e\": container with ID starting with 214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e not found: ID does not exist" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623949 4593 scope.go:117] "RemoveContainer" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" Jan 29 12:35:11 crc kubenswrapper[4593]: E0129 12:35:11.624457 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e\": container with ID starting with c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e not found: ID does not exist" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.624501 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e"} err="failed to get container status \"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e\": rpc error: code = NotFound desc = could not find container \"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e\": container with ID starting with c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e not found: ID does not exist" Jan 29 12:35:13 crc kubenswrapper[4593]: I0129 12:35:13.088041 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" path="/var/lib/kubelet/pods/dfd19db0-a9c1-4aa7-a665-957e97ca991e/volumes" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.945666 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.946225 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.946290 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.947100 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.947188 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f" gracePeriod=600 Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.746537 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f" exitCode=0 Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.746585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f"} Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.746981 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"6b515c98bc904e1b309f647418f96aa9ffe74921bccaa9ccb23cdbcb47a4d89e"} Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.747030 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
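The final entries above show the kubelet failing the machine-config-daemon liveness probe (connection refused on http://127.0.0.1:8798/health), recording "Container machine-config-daemon failed liveness probe, will be restarted", killing the container with a 600-second grace period, and then reporting the replacement container started. As a rough illustration only, the Go sketch below builds the kind of corev1.Probe such log output implies. The port and path are taken from the probe failure message; every threshold value is an invented placeholder, and the embedded field is named Handler rather than ProbeHandler in k8s.io/api releases before v1.23. Nothing here is read from this cluster's manifests.

```go
// Assumption-laden sketch of a liveness probe like the one the log implies for
// machine-config-daemon. Port 8798 and path /health come from the probe
// failure message; the numeric thresholds below are invented placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		// Named Handler instead of ProbeHandler in pre-1.23 k8s.io/api.
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		InitialDelaySeconds: 30, // assumed, not read from the cluster
		PeriodSeconds:       10, // assumed
		FailureThreshold:    3,  // assumed; kubelet restarts the container after this many failures
	}
	fmt.Printf("liveness: GET http://%s:%d%s\n",
		probe.HTTPGet.Host, probe.HTTPGet.Port.IntValue(), probe.HTTPGet.Path)
}
```

The 600-second grace period in the "Killing container with a grace period" entry comes from the pod's termination grace period rather than from the probe spec itself.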
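Earlier in this stretch of the log, each "RemoveContainer" is followed by "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries: the kubelet re-queries CRI-O for containers it has already deleted, and the runtime answers with gRPC NotFound, so the errors are benign. Below is a hypothetical sketch (not kubelet source) of the usual idempotent-delete pattern behind such entries; removeIfPresent, fakeRemove, and the sample container ID are invented for the example, and only the google.golang.org/grpc status and codes helpers are real APIs.

```go
// Hypothetical helper (not kubelet source) showing the idempotent-delete
// pattern behind the "NotFound ... ID does not exist" entries above: a missing
// container already satisfies the goal of removing it.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent runs a delete call and swallows gRPC NotFound, propagating
// every other error. The function and its callback are invented for this sketch.
func removeIfPresent(containerID string, remove func(id string) error) error {
	err := remove(containerID)
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		// The runtime no longer knows the container; treat it as removed.
		return nil
	}
	return fmt.Errorf("remove container %q: %w", containerID, err)
}

func main() {
	// Simulated runtime call that answers like CRI-O does in the log above.
	fakeRemove := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeIfPresent("example-container-id", fakeRemove); err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("container absent or removed")
}
```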